Review Article | Open Access

Generative AI in Scholarly Writing: Opportunities and Ethical Dilemmas

    Yogesh Subhash Chaudhari

    Dr. L.H. Hiranandani College of Pharmacy, Ulhasnagar, India

    Manisha Yogesh Chaudhari

    D.Y. Patil University, School of Pharmacy, Navi Mumbai, India

    Harshal Ashok Pawar

    Dr. L.H. Hiranandani College of Pharmacy, Ulhasnagar, India


Received: 10 Jan, 2025
Accepted: 08 May, 2025
Published: 11 May, 2025

This article examines the intersection of generative AI and academic publishing, focusing on its potential to enhance productivity, accessibility, and collaboration. Generative AI facilitates tasks such as generating first drafts, refining language for non-native speakers, and personalizing research insights. However, its integration raises significant challenges, including ethical dilemmas around authorship, risks of plagiarism, and the generation of inaccurate or biased content. The article explores case studies to illustrate both the opportunities and pitfalls of using AI in scholarly communication. It emphasizes the importance of developing ethical guidelines, ensuring transparency, and fostering responsible use of AI tools. By addressing these challenges, the academic community can balance innovation with integrity, leveraging AI to improve the efficiency and inclusivity of research dissemination. The findings of this article highlight the transformative potential of generative AI while underscoring the need for collaborative efforts among researchers, publishers, and policymakers to navigate its ethical and practical implications. This balanced approach is crucial to harnessing AI’s benefits while safeguarding the core values of academic integrity and quality.

Copyright © 2025 Chaudhari et al. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 

INTRODUCTION

Generative AI, particularly through large language models (LLMs) like ChatGPT, is increasingly influencing scholarly writing by offering tools that enhance writing processes and pedagogical practices. In academic settings, generative AI is used to support students in various writing tasks, acting as a multi-tasking assistant, virtual tutor, and digital peer, which helps improve writing performance and students’ affective engagement [1]. Workshops and educational programs have been developed to integrate these tools into curricula, emphasizing ethical and effortful collaboration between humans and AI to enhance academic skills without compromising creativity [2]. In the realm of computer science, conferences are actively developing policies to navigate the integration of generative AI in scholarly writing, focusing on maintaining academic integrity and addressing ethical concerns [3]. Despite its benefits, generative AI in academic writing is not without challenges. It often lacks the depth of human insight and can introduce biases and uncertainties due to its reliance on specific datasets and parameters [4]. Peer reviewers have noted that while AI-augmented writing can improve readability and informativeness, it may lack the “human touch” and subjective expressions that are crucial in scholarly work. In medical writing, generative AI is seen as a tool to streamline processes, allowing writers to focus on strategic initiatives while maintaining oversight to ensure quality and ethical standards [4]. The integration of generative AI in English-as-a-foreign-language (EFL) education highlights its potential to transform academic writing courses, though it raises concerns about plagiarism and data privacy [5]. Overall, the use of generative AI in scholarly writing is a double-edged sword, offering significant opportunities for enhancing writing skills and efficiency while posing ethical and practical challenges that require careful consideration and balanced approaches.

Examining the role of generative AI in scholarly communication is crucial due to its transformative potential and the challenges it presents across various academic domains. Generative AI tools, such as ChatGPT, have been increasingly integrated into academic settings, offering significant advantages like reducing researchers’ workloads, enhancing the quality of scholarly outputs, and democratizing complex analytical processes. These tools facilitate various academic tasks, including literature reviews, content generation, and data analysis, thereby streamlining the research process and improving efficiency [6]. However, the integration of AI in academia also raises concerns about research integrity, ethical implications, and a potential decline in the quality of scientific papers due to issues like plagiarism and poor citation practices [7]. Early-career researchers, in particular, are at the forefront of these changes, as they are more open to adopting new technologies but also face challenges related to scholarly disparities and inequalities. Furthermore, the role of AI in academic writing and communication is evolving, with AI acting as a collaborative tool that requires careful prompt design and human oversight to ensure responsible use. The impact of AI on peer review processes and the need for ethical frameworks to guide its use in scholarly publishing are also significant considerations. Additionally, students perceive generative AI as beneficial for developing academic communication skills, although they acknowledge its limitations in fostering critical thinking and creativity. As AI continues to reshape the landscape of scholarly communication, academic institutions must balance the benefits of AI with the ethical and practical challenges it poses, ensuring that AI serves as an assistant rather than a replacement in the academic process.

This study aims to critically evaluate the dual role of generative AI in scholarly writing, highlighting its potential to enhance productivity and accessibility while addressing ethical concerns such as plagiarism, authorship ambiguity, and misinformation. The objective is to propose actionable frameworks for integrating AI responsibly into academic workflows, ensuring alignment with core values of transparency and integrity.

Opportunities offered by generative AI: AI-generated summaries and abstracts can significantly enhance the clarity and impact of research manuscripts by improving factual consistency, accessibility, and readability. The integration of semantic graphs and entity pointer networks addresses the issue of factual inaccuracies in AI-generated summaries, ensuring that the summaries remain faithful to the original documents [8]. Moreover, AI tools like ChatGPT have been shown to generate lay summaries that are not only accurate but also more accessible and transparent than traditional abstracts, as demonstrated in the study by Shyr et al. [9], which highlights the potential for AI to facilitate the dissemination of research findings to a broader audience. The use of advanced models such as LED_Large and Pegasus variants further refines the summarization process, capturing critical details and improving semantic understanding, which is crucial for maintaining the integrity of the research content [8]. Abstractive summarization techniques, which generate new language rather than extracting content, offer more natural and coherent summaries, enhancing the readability and fluency of complex scientific texts [10]. Additionally, AI-generated summaries have been found to improve knowledge retention compared to human-written summaries, as they are often clearer and more engaging, thus fostering better understanding among readers. The use of transformer models and other AI techniques in summarization not only aids in quickly grasping the essence of large volumes of research literature but also ensures that the summaries include important conclusions and outcomes, thereby enhancing the overall impact of the research [8]. However, while AI-generated summaries offer numerous benefits, it is crucial to validate their coherence and accuracy to prevent potential misinterpretations [11].
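
To make the abstractive workflow described above concrete, the sketch below applies a publicly released Pegasus checkpoint to a short passage. It is a minimal illustration under stated assumptions (the open-source Hugging Face transformers library, with google/pegasus-xsum standing in for the Pegasus variants discussed above), not a reproduction of the pipelines evaluated in the cited studies, and its output would still require the human validation called for above.

```python
# Minimal sketch of abstractive summarization with a public Pegasus
# checkpoint. Assumptions: the Hugging Face transformers library is
# installed (pip install transformers torch), and google/pegasus-xsum
# stands in for the Pegasus variants discussed above.
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-xsum")

abstract = (
    "Generative AI tools are increasingly used to draft, refine, and "
    "summarize scholarly manuscripts. This review examines their impact "
    "on readability, factual consistency, and research accessibility, "
    "and outlines safeguards for responsible use."
)

# Abstractive models generate new sentences rather than copying source
# text, so the output must still be checked against the original document.
summary = summarizer(abstract, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```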

Generative AI significantly assists non-native English speakers (NNES) in research and writing by providing tools that enhance language proficiency and communication. AI writing assistants such as Wordtune and ChatGPT offer NNES users the ability to paraphrase, organize, and proofread text, which is particularly beneficial in academic and professional settings where precise language use is crucial [12]. These tools help NNES overcome linguistic challenges by offering personalized feedback and rewriting suggestions that align with the user’s intended message, thereby improving fluency and reducing the risk of misinterpretation in digital communications. In educational contexts, AI-driven tools facilitate English language learning by providing adaptive learning systems and conversational agents that offer personalized feedback, thus enhancing learner engagement and language acquisition. Studies have shown that tools like ChatGPT can improve English writing proficiency among NNES by offering interactive writing tasks and feedback, which positively impacts their learning experiences and perceptions [13]. Furthermore, AI tools are leveraged in academic settings to assist NNES students in understanding complex terminology and concepts, as seen in introductory computer science courses where LLM tutors provide accessible and conversational support. Despite these benefits, challenges remain, such as the need for NNES to critically evaluate AI-generated content to maintain their authentic voice and ensure the accuracy of the information.
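
As an illustration of how such language support can be scripted, the following sketch asks a general-purpose LLM to polish a rough draft while preserving the author’s meaning. This is a hypothetical example: the openai Python SDK and the gpt-4o-mini model name are assumptions for illustration, and commercial assistants such as Wordtune expose comparable rewriting features through their own interfaces rather than this code.

```python
# Hypothetical sketch of an academic paraphrasing assistant for
# non-native English speakers. Assumes the openai Python SDK and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

draft = (
    "The experiment datas shows that treatment group improve more "
    "better than control group in all three measure."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any capable model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are an academic writing assistant. Correct grammar "
                "and improve fluency, but preserve the author's intended "
                "meaning and do not add new claims."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```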

Pitfalls and ethical concerns: Generative AI poses significant risks related to the generation of unoriginal or plagiarized content, primarily due to its reliance on existing data for training and content creation. One of the primary concerns is copyright infringement, as generative AI systems often utilize copyrighted materials from the internet to generate new works, which can lead to legal complexities regarding authorship and ownership rights. This issue is particularly pronounced in jurisdictions like China, where the legal framework for AI-generated content is still evolving, and there is a need for reforms to address these challenges effectively [14]. Additionally, the development of tools such as the Copyright Risk Checker (CRC) aims to assess and mitigate these risks by providing preliminary evaluations of potential copyright issues in AI-generated content, which can then be further analyzed by legal experts [15]. Beyond legal concerns, the ethical implications of AI-generated content are significant, as these systems can produce fabricated citations and references, undermining academic integrity and necessitating robust verification methods to ensure the authenticity of AI-generated research articles. Furthermore, the concept of “Text Laundering” highlights the potential for authors to obscure the use of AI in content creation, raising ethical questions about authorship and the integrity of scientific publications. The risks associated with generative AI extend to the cultural and creative industries, where issues such as privacy, misinformation, and intellectual property breaches are prevalent, necessitating new risk management strategies and upskilling to address these challenges.
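
The “robust verification methods” mentioned above can be approximated, at their simplest, by checking whether a cited DOI actually resolves in the public Crossref registry, as sketched below. The requests library and the example DOIs are assumptions for illustration; a fuller verifier would also compare titles and author lists, since a fabricated citation can attach a real DOI to invented metadata.

```python
# Hedged sketch of a basic check for fabricated (hallucinated) citations:
# ask the public Crossref REST API whether a DOI has a registered record.
# Assumes the requests library; example DOIs are for illustration only.
import requests

def doi_registered(doi: str) -> bool:
    """Return True if Crossref holds a record for the given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# A citation invented by a language model will usually fail this lookup;
# a passing DOI still needs its title and authors compared to the claim.
for doi in ("10.1038/s41586-021-03819-2", "10.9999/fabricated.doi.1234"):
    print(doi, "->", "registered" if doi_registered(doi) else "not found")
```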

Generative AI frequently produces scientifically inaccurate or misleading information, a concern highlighted across multiple studies. For instance, in the realm of fluid dynamics, generative AI models such as Midjourney and DALL-E have been found to generate images that do not accurately represent fluid motion phenomena, potentially misleading students and educators in the field [16]. This issue extends beyond visual representations; generative AI’s text outputs can also be flawed due to inaccuracies in training data and the phenomenon of hallucination, where models generate information not grounded in reality. The spread of misinformation by generative AI is not limited to scientific inaccuracies but also includes broader societal impacts, such as influencing public opinion during election cycles and spreading false medical information [17]. The potential for harm is significant, as misinformation can affect decision-making, public health, and democratic processes [18]. Despite efforts to mitigate these issues through explainable AI and responsible AI practices, the challenge of misinformation persists, necessitating the development of verifiable generative AI models to ensure output accuracy. Moreover, the propensity of generative AI to produce disinformation varies across models and contexts, with some models, like GPT-4o, being more prone to generating harmful content than others, like Copilot and Gemini [19].

The question of authorship when generative AI significantly contributes to a manuscript is complex and multifaceted, involving ethical, legal, and practical considerations. According to Faiyazuddin et al. [1], the degree of AI assistance impacts perceptions of human authorship, but AI itself is not typically seen as warranting authorship, creatorship, or responsibility, unlike human assistants. AI-generated works should be protected by copyright due to their originality, but authorship should be attributed to the users of the AI, as AI cannot be considered an author under current legal frameworks. Sharifzadeh [20] explores the philosophical dimensions, suggesting that while AI like ChatGPT could theoretically meet co-authorship criteria, it lacks the moral agency and accountability that are essential for authorship. Faiyazuddin et al. [1] emphasize the importance of distinguishing between human and AI-generated text, highlighting the need for transparency about AI’s role in manuscript preparation. Hinds and Miller [21] assert that AI tools do not meet the standards of authorship as defined by the International Committee of Medical Journal Editors, which requires accountability for all aspects of the content. Wang [19] notes that while AI can assist in manuscript preparation, it cannot replace the human-driven process necessary for high-quality scholarly work. Crawford et al. [22] reinforce that non-human authorship does not constitute authorship, advocating for AI to be used as a supportive tool rather than a replacement for human authorship. Broader work on AI in creative processes likewise questions the minimal requirements for authorship when AI contributes significantly [23].

Future perspectives: To ensure responsible AI usage, a comprehensive set of policies and guidelines must be developed, integrating insights from various academic perspectives. A key recommendation is the establishment of Responsible Access Policies (RAPs), which involve transparent procedures for model access decisions, including empirical evaluations of model capabilities and risk assessments of user categories. Additionally, the global nature of AI necessitates international collaboration to develop standardized safety guidelines, as emphasized by calls for a global agency to oversee the responsible use of AI technology [24]. In developing countries, a tailored AI policy framework is crucial, focusing on infrastructure development, capacity building, ethical governance, and international cooperation to align local policies with global standards [25]. Ethical AI use in research requires moving beyond abstract principles to practical strategies, such as understanding model biases, respecting privacy, and ensuring transparency and reproducibility. Furthermore, responsible AI guidelines should be grounded in regulations and usable across various roles, promoting a design-first approach that embeds ethical considerations throughout the AI development lifecycle. Addressing algorithmic biases, particularly those related to skin color, gender, and age, is essential, as demonstrated by initiatives in Jordan to mitigate such biases through ethical guidelines and regulations [26]. The principles of accountability and transparency are pivotal in mitigating risks and fostering a culture of responsibility among stakeholders [27]. Finally, robust governance structures are necessary to ensure transparency, accountability, and fairness in AI systems, as these principles are critical for navigating the ethical complexities of AI development and maintaining societal trust [28]. By integrating these diverse elements, a holistic approach to responsible AI usage can be achieved, balancing innovation with ethical integrity.

DISCUSSION

Policymakers and stakeholders can effectively collaborate to mitigate the negative consequences of AI’s pitfalls by adopting a multifaceted approach that integrates policy development, ethical frameworks, and international cooperation. One strategy involves using generative scenario writing methods to evaluate the efficacy of policies in mitigating AI’s negative impacts, as demonstrated by the use of large language models to simulate policy impacts and assess their effectiveness across various dimensions, such as severity and specificity to vulnerable populations. Additionally, stakeholders must work together to create ethical frameworks that ensure AI strengthens democratic processes rather than undermines them, addressing concerns about accountability, transparency, and manipulation risks in political contexts. The development of a taxonomy of harms associated with AI likeness generation can guide policymakers in addressing specific societal challenges, emphasizing the need for context-specific mitigations and distinguishing between generation and distribution of likeness. Furthermore, fostering collaboration among academia, industry, and policymakers is crucial for addressing AI’s complexities, balancing regulation with innovation, and ensuring ethical use through open-source approaches. Policymakers should also focus on mitigating AI-induced inequality by enhancing human-AI collaboration, strengthening worker power, and adjusting tax codes to discourage the automation of human labor. Addressing unobserved confounding in human-AI collaboration through robust policy frameworks can improve the reliability of outcomes, leveraging diverse expertise and mitigating biases. Proactive strategies to address explainability pitfalls in AI systems are necessary to prevent unintended negative effects and recalibrate stakeholder empowerment. On a global scale, forming an international body to standardize AI technology and ensure responsible use is essential, as AI’s impact transcends national borders. Governments can enhance AI risk management through interconnected post-deployment monitoring, collecting data to inform impact assessments, and managing AI risks effectively. Finally, developing countries can benefit from a comprehensive AI policy framework that emphasizes infrastructure development, capacity building, and ethical governance while fostering international cooperation to align local policies with global standards. By integrating these strategies, policymakers and stakeholders can collaboratively address AI’s challenges and harness its potential for societal benefit.
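
The generative scenario-writing method mentioned above can be sketched as a short prompt-and-score loop in which a model drafts a concrete harm scenario under a candidate policy and rates its severity, giving reviewers a rough, inspectable signal. The policy text, prompt wording, severity scale, and model name below are illustrative assumptions, not the protocol of any cited study.

```python
# Illustrative sketch of LLM-based scenario writing for policy evaluation.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment;
# the prompt and the 1-5 severity scale are invented for this example.
from openai import OpenAI

client = OpenAI()

policy = (
    "Journals must require authors to disclose any use of generative AI "
    "in preparing a manuscript."
)

def scenario_for(policy_text: str) -> str:
    """Ask the model for one concrete failure scenario plus a severity rating."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                f"Candidate policy: {policy_text}\n"
                "Write a 3-4 sentence scenario in which this policy fails to "
                "prevent a harm from generative AI in scholarly publishing, "
                "then rate the harm from 1 (minor) to 5 (severe) on a final "
                "line formatted as 'Severity: N'."
            ),
        }],
    )
    return response.choices[0].message.content

print(scenario_for(policy))
```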

CONCLUSION

Generative AI offers transformative opportunities for streamlining scholarly communication, including automated summarization, language refinement, and democratizing research access. However, its use necessitates robust ethical guidelines to mitigate risks of plagiarism, inaccuracies, and authorship disputes. Collaborative efforts among researchers, publishers, and policymakers are vital to establish accountability mechanisms and transparency standards. By balancing innovation with integrity, the academic community can harness AI’s potential while safeguarding the credibility of scholarly work. This equilibrium is essential for fostering trust and inclusivity in AI-augmented research ecosystems.

SIGNIFICANCE STATEMENT

Generative AI has emerged as a powerful tool in academic writing, offering researchers opportunities to enhance their productivity, improve accessibility, and streamline scholarly communication. By assisting in tasks such as drafting, editing, and organizing research, these tools can help scientists focus more on innovation and discovery. However, the use of AI in scholarly writing also raises significant concerns, including risks of plagiarism, inaccuracies, and ethical challenges surrounding authorship and accountability. This article explores both the potential and the pitfalls of generative AI in transforming the way research is communicated. It emphasizes the importance of responsible use, clear policies, and ethical frameworks to ensure that AI benefits society without compromising the integrity of academic work. By addressing these challenges and leveraging AI’s strengths, researchers, educators, and policymakers can shape a future where AI contributes meaningfully to advancing knowledge and making research more accessible to everyone. This balance is vital for ensuring that the use of generative AI aligns with the core values of transparency, fairness, and collaboration in science.

ACKNOWLEDGMENTS

The authors would like to acknowledge the HSNC Board, Mumbai, Dr. Parag Gide, Principal, Dr. L.H. Hiranandani College of Pharmacy, Ulhasnagar, and Dr. Rakesh Somani, Principal, D.Y. Patil University School of Pharmacy, Navi Mumbai, for constant guidance and motivation.

REFERENCES

  1. Faiyazuddin, M., A.D. Gholap, S. Maqsood, S. Abbas, Y.S. Chaudhari, S. Sharmila and T.J. Webster, 2024. Prospects and ethics of ChatGPT in biomedical research and medical practice: An emphasis on nanotechnology. J. Biomed. Nanotechnol., 20: 1661-1678.
  2. Amos, J., 2024. Ethical and effortful: Workshopping human and generative AI academic writing collaborations. J. Learn. Dev. Higher Educ., 32.
  3. Stella, F., C.D. Santina and J. Hughes, 2023. How can LLMs transform the robotic design process? Nat. Mach. Intell., 5: 561-564.
  4. Ramachandran, R.K., T.J. Linenbach, C.J. Ceppi and A.G. Modi, 2024. Reimagining clinical and regulatory medical writing with generative AI. Am. Med. Writers Assoc., 39.
  5. Tian, C., 2024. The influence of generative AI technologies on academic writing in EFL education. J. Educ. Humanit. Social Sci., 28: 575-584.
  6. Panda, S. and N. Kaur, 2024. Exploring the role of generative AI in academia: Opportunities and challenges. IP Indian J. Lib. Sci. Inf. Technol., 9: 12-23.
  7. Edam, S.M.I., 2025. Restructuring the Landscape of Generative AI Research. In: Impacts of Generative AI on the Future of Research and Education, Mutawa, A. (Ed.), IGI Global, Hershey, Pennsylvania, ISBN: 9798369308844, pp: 287-334.
  8. Blekanov, I.S., N. Tarasov and S.S. Bodrunova, 2022. Transformer-based abstractive summarization for Reddit and Twitter: Single posts vs. comment pools in three languages. Future Internet, 14.
  9. Shyr, C., R.W. Grout, N. Kennedy, Y. Akdas and M. Tischbein et al., 2024. Leveraging artificial intelligence to summarize abstracts in lay language for increasing research accessibility and transparency. J. Am. Med. Inf. Assoc., 31: 2294-2303.
  10. Zaman, F., M. Afzal, P.S. Teh, R. Sarwar and F. Kamiran et al., 2024. Intelligent abstractive summarization of scholarly publications with transfer learning. J. Inf. Web Eng., 3: 256-270.
  11. Schmitz, B., 2023. Improving accessibility of scientific research by artificial intelligence: An example for lay abstract generation. Digital Health, 9.
  12. Zhao, X., 2023. Leveraging artificial intelligence (AI) technology for English writing: Introducing Wordtune as a digital writing assistant for EFL writers. RELC J., 54: 890-894.
  13. Masoudi, H.A., 2024. Effectiveness of ChatGPT in improving English writing proficiency among non-native English speakers. Int. J. Educ. Sci. Arts, 3: 62-84.
  14. Zhou, T. and M.R. Abd Rahman, 2024. Legal perspective on the risk of copyright infringement by AI-generated contents in China. J. Undang-Undang Masyarakat, 34: 141-153.
  15. Billiris, G., A. Gill, I. Oppermann and M. Niazi, 2024. Towards the development of a copyright risk checker tool for generative artificial intelligence systems. Digital Gov.: Res. Pract., 5.
  16. Akhtar, P., A.M. Ghouri, H.U.R. Khan, M. Amin ul Haq and U. Awan et al., 2023. Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Ann. Oper. Res., 327: 633-657.
  17. Monteith, S., T. Glenn, J.R. Geddes, P.C. Whybrow, E. Achtyes and M. Bauer, 2024. Artificial intelligence and increasing misinformation. Br. J. Psychiatry, 224: 33-35.
  18. Marushchak, A., S. Petrov and A. Khoperiya, 2025. Countering AI-powered disinformation through national regulation: Learning from the case of Ukraine. Front. Artif. Intell., 7.
  19. Wang, H., 2023. Authorship of artificial intelligence-generated works and possible system improvement in China. Beijing Law Rev., 14: 901-912.
  20. Sharifzadeh, R., 2024. ChatGPT as co-author? AI and research ethics. Ethics Prog., 15: 155-173.
  21. Hinds, P.S. and A.B. Miller, 2023. Our words and the words of artificial intelligence: The accountability belongs to us. Cancer Care Res. Online, 3.
  22. Crawford, J., M. Cowling, S. Ashton-Hay, J.A. Kelder, R. Middleton and G. Wilson, 2023. Artificial intelligence and authorship editor policy: ChatGPT, Bard Bing AI, and beyond. J. Univ. Teach. Learn. Pract., 20.
  23. Novelli, C., M. Taddeo and L. Floridi, 2024. Accountability in artificial intelligence: What it is and how it works. AI Soc., 39: 1871-1882.
  24. Chaturvedi, S. and A. Kumar, 2024. Responsible AI growth with safety: Exploring global and national policy discourse. J. Inf. Knowl., 61: 231-237.
  25. Folorunso, A., K. Olanipekun, T. Adewumi and B. Samuel, 2024. A policy framework on AI usage in developing countries and its impact. Global J. Eng. Technol. Adv., 21: 154-166.
  26. Ibrahim, S.M., M. Alshraideh, M. Leiner, I.M. AlDajani and O. Bettaz, 2024. Artificial intelligence ethics: Ethical consideration and regulations from theory to practice. IAES Int. J. Artif. Intell., 13: 3703-3714.
  27. Meduri, K., S. Podicheti, S. Satish and P. Whig, 2024. Accountability and Transparency Ensuring Responsible AI Development. In: Ethical Dimensions of AI Development, Bhattacharya, P., A. Hassan, H. Liu and B. Bhushan (Eds.), IGI Global, Hershey, Pennsylvania, ISBN: 9798369341476, pp: 83-1025.
  28. Sistla, S., 2024. AI with integrity: The necessity of responsible AI governance. J. Artif. Intell. Cloud Comput., 3.

How to Cite this paper?

APA-7 Style
Chaudhari, Y. S., Chaudhari, M. Y., & Pawar, H. A. (2025). Generative AI in Scholarly Writing: Opportunities and Ethical Dilemmas. Trends in Scholarly Publishing, 4(1), 22-28. https://doi.org/10.17311/tsp.2025.22.28