Research Article | Open Access

Editorial Gatekeeping in the Age of Generative AI

    Laura Dormer

    Becaris Publishing Ltd, United Kingdom

    Alaina Webster

    American Society for Clinical Pharmacology and Therapeutics, United States of America

    Jonathan Schultz

    American Heart Association, United States of America


Received: 24 November 2025
Accepted: 12 January 2026
Published: 31 January 2026

Generative artificial intelligence (AI) is transforming scholarly publishing and redefining the role of editorial gatekeeping. As AI systems become increasingly embedded in manuscript preparation, peer review, and editorial workflows, traditional policies focused on authorship disclosure are proving insufficient. The challenge now lies in governing how AI is used, ensuring transparency, accountability, and human oversight in processes that directly shape research credibility. This Perspective argues that sustainable editorial practice in the age of AI requires hybrid models where automation supports, rather than supplants, human judgment. By strengthening governance frameworks, promoting AI literacy, and reaffirming integrity as the guiding principle of peer review, the scholarly community can harness innovation without compromising trust at the core of science.

Copyright © 2026 Dormer et al. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 

INTRODUCTION

The arrival of generative artificial intelligence (AI) at the end of 2022 marked a turning point for scholarly publishing. Within months, tools capable of generating fluent text, summarizing arguments, checking references, and analyzing patterns began influencing how manuscripts were written, reviewed, and evaluated. This rapid shift challenged long-standing assumptions about authorship, reviewer responsibilities, and the role of editors as custodians of research integrity. What initially appeared to be a narrow conversation about whether AI can be an author quickly expanded into deeper questions: How should AI be governed across the publication lifecycle? What forms of disclosure are adequate? How can transparency be ensured when AI systems operate at scale and with opaque internal mechanisms?1

Editorial gatekeeping, traditionally grounded in expert judgment, ethical oversight, and trust, must now adapt to an environment in which machine-generated content intersects with human scholarship2. This Perspective explores how generative AI is reshaping editorial practice and proposes a model of responsible, hybrid collaboration where automation enhances efficiency while preserving the human values essential to scientific credibility.

RETHINKING EDITORIAL POLICIES FOR A RAPIDLY SHIFTING LANDSCAPE

Early editorial responses focused on authorship rules, requiring disclosure of AI-assisted writing and clarifying that AI tools cannot be credited as authors. While these steps were an important starting point, they are insufficient given AI's expanding capabilities: current systems influence peer review, language clarity, research integrity screening, data validation, and preliminary decision-making3,4.

To govern these expanding use cases responsibly, policies must evolve from static, single-purpose rules into dynamic frameworks guided by three principles (a sketch of structured disclosure follows the list):

  Transparency: Authors, reviewers, and editors must disclose AI use clearly and consistently
  Human accountability: Final responsibility for manuscripts and reviews must remain with human contributors
  Adaptability: Policies must be revisited regularly as new AI tools emerge
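
To make the transparency principle concrete, the sketch below shows one hypothetical way a journal could collect AI-use disclosures as structured records rather than free text. The field names and categories are illustrative assumptions, not an existing standard; this is a sketch of the idea, not a prescribed implementation.

    # Hypothetical structured AI-use disclosure record. Field names and
    # categories are illustrative assumptions, not any journal's standard.
    from dataclasses import dataclass

    @dataclass
    class AIUseDisclosure:
        role: str               # "author", "reviewer", or "editor"
        tool: str               # e.g., "ChatGPT"
        purpose: str            # e.g., "language polishing"
        content_affected: str   # which part of the manuscript or review
        human_verified: bool    # a named person checked the output

    disclosures = [
        AIUseDisclosure(role="author", tool="ChatGPT",
                        purpose="language polishing",
                        content_affected="Introduction and Discussion",
                        human_verified=True),
    ]

    # Structured records can be validated, shown to reviewers, and audited,
    # unlike free-text statements buried in a cover letter.
    for d in disclosures:
        assert d.human_verified, "AI-assisted content lacks human verification"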

Editorial governance is no longer solely about preventing misconduct; it must actively shape the ethical integration of automation into scholarly workflows.

EDITOR-REVIEWER DYNAMIC: REBUILDING TRUST AND EXPECTATIONS

One of the clearest signs of change is the evolving editor-reviewer relationship. Reviewers are already using AI to structure feedback, verify references, or improve clarity. The question is no longer whether AI will be used, but how journals can shape its responsible adoption. Clear, well-communicated policies that define acceptable applications are essential to sustaining transparency and trust.

Early experiments by larger publishers illustrate one possible pathway. Some have integrated secure AI assistants directly into manuscript systems. These tools can automate formatting checks and reference validation while protecting confidentiality. Early evidence suggests such tools may alleviate reviewer workload, an important consideration given the evidence of widespread reviewer fatigue1,5.
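
As one concrete illustration of automated reference validation, the minimal sketch below checks that a cited DOI resolves in the public Crossref REST API and loosely matches the cited title. It is a toy that assumes every reference carries a DOI; the secure assistants described above run inside manuscript systems and use far more robust matching.

    # Minimal reference-validation sketch against the public Crossref API.
    # A toy: assumes each reference has a DOI; integrated publisher tools
    # run on secure infrastructure with proper fuzzy matching.
    import requests

    def validate_reference(doi: str, cited_title: str) -> bool:
        """Return True if the DOI resolves and roughly matches the cited title."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code != 200:
            return False  # unregistered DOI: flag for a human editor
        registered = " ".join(resp.json()["message"].get("title", [])).lower()
        cited = set(cited_title.lower().split())
        # Crude word-overlap score stands in for real fuzzy matching
        return len(cited & set(registered.split())) / max(len(cited), 1) > 0.6

    # Hypothetical values for illustration; the tool only flags mismatches,
    # and the editor decides what a mismatch means.
    print(validate_reference("10.1000/example-doi", "An example cited title"))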

Yet these efficiencies must not obscure the limits of automation. AI cannot replicate the contextual judgment or disciplinary intuition that peer review demands. It can assist, but not replace, critical interpretation6. The continuity of editorial trust depends on ensuring that AI remains a support mechanism, enhancing speed and accuracy without eroding the essential role of human expertise.

OPERATIONAL REALITIES: INSIDE THE EDITORIAL OFFICE

While broad policies are being developed, editorial offices encounter the operational effects of AI adoption every day. The pace of change is fast, and guidance often shifts before it can be fully implemented; policies therefore lag behind the technology they seek to regulate, leaving editorial staff uncertain about what constitutes appropriate use and authors and reviewers confused about what is expected of them. Generative AI has also changed submission dynamics: several journals report an increase in letters to the editor and short-form commentaries, which are easier to generate with AI assistance. Even when such submissions are rejected, they add to the workload of already stretched editorial teams.

AI can nonetheless support efficiency if applied judiciously. Tools capable of detecting papermill patterns, triaging unsuitable manuscripts for desk rejection, or identifying language irregularities can reduce reviewer burden and improve the quality of manuscripts entering peer review6; a simplified sketch of such screening follows below. AI can also reduce language barriers, enabling more inclusive participation in peer review and making feedback clearer across linguistic divides7. At the same time, clear safeguards are needed. Uploading confidential material to public AI systems breaches confidentiality and can violate intellectual property protections; editorial offices must explicitly prohibit such practices through policy and education.
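
The sketch below makes the screening idea concrete by flagging "tortured phrases", garbled machine paraphrases of standard technical terms that are a documented papermill signal. The three phrases are well-known published examples, but the detector itself is a deliberately naive illustration; production screeners combine many weighted signals and always route flags to a human.

    # Naive integrity-screening sketch: flags "tortured phrases", a documented
    # papermill signal. The phrase list is a tiny sample; real screeners
    # combine many signals and never reject manuscripts on their own.
    TORTURED_PHRASES = {
        "counterfeit consciousness": "artificial intelligence",
        "bosom peril": "breast cancer",
        "colossal information": "big data",
    }

    def screen_text(text: str) -> list[str]:
        """Return flags for a human editor to review."""
        lowered = text.lower()
        return [
            f"'{phrase}' may be a tortured paraphrase of '{original}'"
            for phrase, original in TORTURED_PHRASES.items()
            if phrase in lowered
        ]

    for flag in screen_text("We train a counterfeit consciousness model."):
        print("FLAG:", flag)  # routed to an editor, never auto-rejected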

SMALL AND INDEPENDENT PUBLISHERS: STRATEGIC ADOPTION AND SAFEGUARDS

For smaller publishers, the rapid spread of AI tools poses a different set of challenges. Limited resources and technical capacity make it difficult to evaluate or adopt costly AI solutions. It is not realistic for small publishers to adopt every new tool. Instead, they must focus on areas where AI can address specific problems, such as reviewer matching or language support.

Data privacy and security are particularly important. Publishers must ensure that manuscripts remain confidential and that ethical standards are maintained. Aligning with guidance on AI use from international bodies such as the Committee on Publication Ethics (COPE) provides an ethical foundation for responsible implementation within constrained contexts3. Whether large or small, all publishers face the same imperative: to establish guardrails that make AI use transparent, accountable, and fair.

BUILDING GUARDRAILS: TRANSPARENCY, BIAS, AND HUMAN OVERSIGHT

Transparency will be central to building trust in AI-supported peer review. Editors must communicate to authors and reviewers how AI is already embedded in workflows, whether for reference validation, plagiarism screening, manuscript triage, or reviewer selection. This openness helps preserve trust and clarifies where human expertise remains essential.

When reviewers understand what is automated, they can focus on the evaluative aspects of their role.

At the same time, peer review must retain a human in the loop. While AI can flag technical inconsistencies, detect anomalies, or highlight potential issues, it cannot assess novelty, significance, or contextual relevance. These judgments remain the domain of human reviewers. Editorial policies should guide reviewers toward the aspects of evaluation where their insight adds the greatest value, ensuring that technology supports rather than substitutes for intellectual judgment.

AI will undoubtedly reshape aspects of editorial practice, but it must never obscure accountability. Systems lacking explainability or traceability risk eroding trust in editorial decisions. All AI-assisted processes should therefore remain transparent, auditable, and governed by clear human oversight. Bias detection, data provenance, and continuous auditing must form part of every journal’s quality assurance framework7,8. The objective is not to eliminate AI from peer review, but to govern it wisely. Some publishers are introducing dedicated AI oversight editors, akin to statistical or ethics editors, to evaluate how tools are deployed and maintained, a step that may soon become integral to safeguarding editorial integrity.
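
As a sketch of what "auditable" could mean in practice, the hypothetical example below logs each AI recommendation together with the human decision and the named person responsible, so an AI oversight editor can later reconstruct any step. The record structure is an illustrative assumption, not a description of any existing publisher system.

    # Hypothetical audit-trail record for an AI-assisted editorial step.
    # The structure is an illustrative assumption, not an existing system.
    import json
    from datetime import datetime, timezone

    def log_ai_assisted_step(manuscript_id: str, tool: str, tool_output: str,
                             human_decision: str, decided_by: str) -> str:
        """Pair an AI suggestion with the human decision that confirmed or
        overrode it, so oversight editors can audit the step later."""
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "manuscript_id": manuscript_id,
            "tool": tool,
            "tool_output": tool_output,        # what the AI recommended
            "human_decision": human_decision,  # what a person decided
            "decided_by": decided_by,          # named accountability
        })

    # Hypothetical values for illustration.
    print(log_ai_assisted_step("MS-2026-0042", "triage-assistant",
                               "flag: possible image duplication",
                               "sent to integrity review", "handling editor"))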

Table 1 shows the integration of human judgment and AI-assisted processes across the manuscript lifecycle.

Table 1: Hybrid human-AI editorial workflow model
  Stage 1 (Submission): AI screens for plagiarism, formatting, and citation accuracy; the editor validates the results.
  Stage 2 (Triage): AI flags potential issues (data anomalies, papermill signals); editors decide between desk rejection and review.
  Stage 3 (Peer review): Reviewers may use AI for summarization or clarity; intellectual judgment remains human.
  Stage 4 (Decision): AI provides structured reports; editors make the final decision.
  Stage 5 (Production): AI checks metadata and consistency; production editors conduct final verification.
This table illustrates the principle of AI as an assistant, not a decision-maker, ensuring accountability and oversight.

FUTURE DIRECTIONS: GOVERNANCE, EQUITY, AND GLOBAL INCLUSION

The rise of generative AI requires editors to revisit long-standing assumptions about peer review. Blanket prohibitions on AI are unlikely to succeed and are already being reconsidered. Instead, journals will need thoughtful policies that balance efficiency and inclusivity with human accountability9,10.

The future of peer review will depend on whether AI is treated as a collaborator rather than a threat. Evidence suggests that AI is most effective when used as an assistant that reduces workload and enhances quality, while leaving judgment and accountability with humans11. The task for editors and reviewers is to identify where their expertise is irreplaceable, and to build workflows that keep human judgment at the center of scholarly publishing.

In redefining gatekeeping for the age of AI, editors have the opportunity not merely to safeguard integrity, but to reshape the very ethos of peer review for the next generation of science. As the boundaries of editorial work continue to expand, the community must also confront deeper cultural questions: What kind of peer review ecosystem do we want to build for the next generation of researchers? The rise of AI is exposing long-standing pressures: hyper-competition, reviewer fatigue, inequitable participation, and a growing emphasis on speed over rigor, none of which technology alone can resolve. AI tools may help streamline workflows, but they cannot determine what fairness looks like, nor can they address structural inequities in global research ecosystems. These are human responsibilities that require deliberate editorial leadership. Framing AI adoption within a broader vision of equity, inclusivity, and responsibility ensures that technology enhances, not distorts, the core mission of scholarly publishing. In this sense, the challenge is not only technical but cultural: to cultivate an editorial ethos that remains resilient, principled, and globally representative in an era of rapid automation.

CONCLUSION

Generative AI represents a profound transformation in scholarly publishing, reshaping how manuscripts are written, evaluated, and curated. Although AI can enhance efficiency, detect misconduct, and expand participation, it also introduces risks related to transparency, bias, and accountability. The future of editorial gatekeeping lies in hybrid models where human judgment and machine intelligence operate together, each reinforcing the other’s strengths. Editors play a pivotal role in guiding this transition, ensuring that innovation proceeds without compromising integrity. By embracing AI as a tool of stewardship rather than replacement, the editorial community can build a peer review ecosystem that is more transparent, inclusive, and trusted.

SIGNIFICANCE STATEMENT

This Perspective examines the emerging dynamics of generative AI-assisted editorial workflows and their potential to strengthen transparency, accountability, and decision-making in scholarly publishing. By examining how hybrid models of human-AI collaboration can reinforce editorial integrity, it highlights the governance mechanisms essential for responsible adoption. The analysis directs researchers toward underexplored aspects of AI-driven gatekeeping and may inform a theory of sustainable editorial stewardship.

REFERENCES

  1. Wicherts, J.M., 2016. Peer review quality and transparency of the peer-review process in open access and subscription journals. PLoS ONE, 11.
  2. Editorial, 2023. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613: 612.
  3. Ross-Hellauer, T. and E. Görögh, 2019. Guidelines for open peer review implementation. Res. Integrity Peer Rev., 4.
  4. Besançon, L., N. Peiffer-Smadja, C. Segalas, H. Jiang, P. Masuzzo and C. Smout et al., 2021. Open science saves lives: Lessons from the COVID-19 pandemic. BMC Med. Res. Methodol., 21.
  5. Tennant, J.P. and T. Ross-Hellauer, 2020. The limitations to our understanding of peer review. Res. Integrity Peer Rev., 5.
  6. Silverstein, P., C. Elman, A. Montoya, B. McGillivray and C.R. Pennington et al., 2024. A guide for social science journal editors on easing into open science. Res. Integrity Peer Rev., 9.
  7. van Quaquebeke, N., S. Tonidandel and G.C. Banks, 2025. Beyond efficiency: How artificial intelligence (AI) will reshape scientific inquiry and the publication process. Leadersh. Q., 36.
  8. Tabaghdehi, S.A.H. and Ö. Ayaz, 2025. AI ethics in action: A circular model for transparency, accountability and inclusivity. J. Managerial Psychol.
  9. Carobene, A., A. Padoan, F. Cabitza, G. Banfi and M. Plebani, 2024. Rising adoption of artificial intelligence in scientific publishing: Evaluating the role, risks, and ethical implications in paper drafting and review process. Clin. Chem. Lab. Med., 62: 835-843.
  10. Woods, H.B., J. Brumberg, W. Kaltenbrunner, S. Pinfield and L. Waltman, 2023. An overview of innovations in the external peer review of journal manuscripts. Wellcome Open Res., 7.
  11. Doskaliuk, B., O. Zimba, M. Yessirkepov, I. Klishch and R. Yatsyshyn, 2025. Artificial intelligence in peer review: Enhancing efficiency while preserving integrity. J. Korean Med. Sci., 40.

How to Cite this paper?

APA-7 Style
Dormer, L., Webster, A., & Schultz, J. (2026). Editorial Gatekeeping in the Age of Generative AI. Trends in Scholarly Publishing, 5(1), 23-27. https://doi.org/10.21124/tsp.2026.23.27

ACS Style
Dormer, L.; Webster, A.; Schultz, J. Editorial Gatekeeping in the Age of Generative AI. Trends Schol. Pub. 2026, 5, 23-27. https://doi.org/10.21124/tsp.2026.23.27

AMA Style
Dormer L, Webster A, Schultz J. Editorial Gatekeeping in the Age of Generative AI. Trends in Scholarly Publishing. 2026; 5(1): 23-27. https://doi.org/10.21124/tsp.2026.23.27

Chicago/Turabian Style
Dormer, Laura, Alaina Webster, and Jonathan Schultz. 2026. "Editorial Gatekeeping in the Age of Generative AI." Trends in Scholarly Publishing 5, no. 1: 23-27. https://doi.org/10.21124/tsp.2026.23.27