Perspective | Open Access

Reimagining Peer Review in Scholarly Publishing: From Blind to Open in the Digital Age

    Esma Şenel

    Çanakkale Onsekiz Mart University, School of Graduate Studies, Çanakkale, Türkiye


Peer review has served as the cornerstone of scholarly validation for centuries, and the academic publishing ecosystem is currently witnessing a critical transformation in how peer review is organized, conducted, and evaluated. Traditional blind peer review models were historically designed to ensure objectivity, yet they struggle to meet the digital age's demands for transparency, reviewer recognition, and accountability. As part of this shift, alternative models such as Open Peer Review, collaborative evaluation, and post-publication commentary are becoming more visible and are driving changes in editorial policy, journal governance, and research integrity practices. These models promote openness and inclusivity, but they also raise questions about consistency, workload distribution, and the equitable treatment of authors and reviewers across disciplines. This perspective article assesses the implications of these evolving systems for scholarly publishing workflows, particularly in terms of reviewer accountability, policy design, and the need for clear standards to manage both human and AI-assisted evaluation. It also emphasizes the importance of reviewer education in sustaining fair, ethical, and high-quality assessment practices.

Copyright © 2026 Esma Şenel. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 

INTRODUCTION

Peer review is a foundational component of the scientific enterprise, serving as the primary mechanism for validation and quality control in scholarly communication1,2. The process has been described as a “black box”, as it has largely relied on blinded models in which the identities of reviewers and authors are not fully visible3,4. Although traditional anonymized review aims to ensure impartiality, it has drawn increasing criticism for its lack of transparency, its inefficiency, and its high costs. Inefficiency is a particularly persistent problem: critics point out that the “waterfall” structure of traditional submission wastes valuable time and reviewer effort, because each journal requires a completely new review cycle when a manuscript is rejected5. Furthermore, empirical evidence indicates that double-blind models do not fully eliminate biases related to gender, geography, or institutional affiliation6.

As the academic community shifts toward the principles of Open Science, there is a growing demand to make scientific practices more transparent, collaborative, and accessible2. The expansion of digital infrastructure and advances in artificial intelligence have given rise to new evaluation models that challenge existing norms7,8. Within this evolving landscape, Open Peer Review (OPR) has emerged as a prominent approach for addressing the limitations of traditional review practices3. Rethinking peer review in the digital age, therefore, depends on a clear understanding of these diverse traits, together with the technological tools that can support them.

However, growing pressure on the peer review system has created new challenges for publishers. Reviewer fatigue has been increasing due to high submission volumes and excessive workload, making it harder to secure timely and thorough evaluations5. Trust in the process has also declined because of ethical issues such as fake reviewer accounts and undisclosed conflicts of interest3. Moreover, the rise of AI-assisted writing and paper mills has introduced new integrity threats that are difficult to detect using traditional screening methods7. These developments have shifted the debate around peer review from a methodological concern to a core publishing policy priority, demanding concrete reforms to ensure the credibility, fairness, and long-term sustainability of scholarly communication. In this context, this perspective highlights how evolving peer review models can strengthen transparency, accountability, and trust in scholarly publishing in the digital era. It identifies key policy, technological, and educational gaps that limit the effectiveness of both traditional and open review systems. The insights offered aim to inform journal editors, publishers, and research institutions in designing fair, sustainable peer review frameworks that responsibly integrate human expertise with AI-assisted evaluation while safeguarding research integrity.

EMERGING PEER REVIEW MODELS

The shift toward digitization has transformed academic publishing from a largely static, print-based process into a dynamic and interactive scholarly network. This evolution has given rise to various peer review models that offer different degrees of openness. As Ross-Hellauer notes, Open Peer Review (OPR) is not a single model but an “umbrella term” that includes practices such as the disclosure of reviewer identities and the publication of review reports3. Experiences from publishers such as BMJ, Copernicus, and F1000Research suggest that increasing transparency in peer review may reduce unethical practices such as ghostwriting and undisclosed conflicts of interest. By making reviewer identities and reports public, these platforms seek to promote accountability. However, wider adoption creates operational challenges for publishers because scalability depends on reviewer willingness, platform capacity, and clear integration into editorial workflows and timelines3,4,9.

In addition to transparency, collaborative review models have emerged to dissolve the traditional division between author and reviewer. Platforms such as eLife and Frontiers facilitate direct interaction between editors, reviewers, and authors, allowing them to discuss a manuscript in real time and reach a consensus. As noted by Sarwar et al.7, this interactive approach transforms peer review from a hostile interrogation into a collective effort aimed at improving the manuscript. Furthermore, post-publication peer review platforms like PubPeer and preprint servers such as bioRxiv allow for continuous assessment by the scientific community. The feasibility of these models depends heavily on the available technological infrastructure. Dyke points out that the rapid expansion of AI tools brings both new opportunities and ethical challenges, positioning technology as an active participant in the peer review process10.

To clarify the distinctions between these approaches, Table 1 compares the core features of established conventions versus these emerging evaluation frameworks.

Table 1: Comparison of traditional and emerging peer review models

Feature | Traditional peer review | Emerging models
Anonymity | Single- or double-blind (identities hidden) | Open identities (identities disclosed)
Transparency | Low (reports are confidential) | High (reports published with the article)
Interaction | Minimal (mediated by the editor) | Collaborative (direct interaction possible)
Timing | Pre-publication only | Pre-publication, post-publication, or continuous
Reviewer credit | Internal (known only to the editor) | Public (citable; verified via platforms such as Reviewer Credits)
Primary risk | Accountability gaps, hidden bias | Potential retaliation, self-censorship
Role of AI | Limited/skeptical use | Integrated for screening and matching10

While traditional models protect reviewer confidentiality to encourage honest feedback, emerging frameworks align with the broader goals of Open Science by aiming to make the technical validity of research more transparent and easier to verify. It is important to note that these new models operate along a spectrum rather than as fixed choices. Some journals, for example, adopt open reports while still allowing reviewers to remain anonymous, a hybrid approach that balances transparency with reviewer protection6. Moreover, innovations such as post-publication commenting allow for a “living” document that evolves through continuous community feedback, challenging the traditional notion of a static “version of record”5. These developments illustrate that peer review is transforming from a rigid gatekeeping process into a dynamic system better suited to the digital era.

BENEFITS AND OPPORTUNITIES

The transition to open peer review models offers substantial opportunities to enhance the quality of scientific communication. One of the most significant benefits is the increase in transparency and accountability. When journals publish reviewer reports alongside articles, they turn the traditional “black box” of evaluation into a transparent process that helps readers better understand the research and assess the quality of the review3,6. As highlighted by the editors of Nature Human Behaviour, transparency facilitates a more robust quality control mechanism by allowing readers to evaluate the decision-making process11. Moreover, studies show that when reviews are made public, the quality and constructiveness of reports can improve, as reviewers tend to be more careful and thorough when they know their evaluations will be visible to the broader community4,12. A study by Holst et al.6 suggests that open processes can encourage reviewers to act more responsibly, which may help reduce unconscious biases. While the disclosure of reviewer identities is still debated, supporters argue that signed reviews encourage more respectful communication, reduce conflicts of interest, and hold reviewers accountable for their decisions3,4.

Moreover, these models facilitate the recognition of reviewer contributions. While reviewing remains invisible labor in the traditional system, open models allow reports to be assigned DOIs and verified through platforms like Reviewer Credits. This transforms the task into a citable contribution for academic CVs. Open reports also serve as valuable educational resources. Early-career researchers can utilize these documents to understand how to conduct high-quality reviews, which helps transform the peer review process into a form of open mentorship. Finally, innovations like open interaction and community participation promote direct dialogue between authors and reviewers, shifting peer review from a static judgment into a collaborative process that improves publications through ongoing consensus-building3,5.

CHALLENGES AND RISKS

The implementation of open peer review is not without risks. One of the main concerns is the potential for retaliation against early-career researchers, who may hesitate to criticize the work of senior academics if their identity is disclosed3,6. This situation could lead to self-censorship or the refusal of review invitations, which would compromise the objectivity of the process4,7. Studies suggest that requiring reviewers to reveal their identities can lead to higher refusal rates or longer review times, as many scholars still prefer the anonymity of traditional blinded systems. Survey data also show that a significant number of researchers are concerned that openness could reduce honesty in evaluations13.

There is also a risk that biases may be reproduced rather than eliminated within open models. Frameworks that rely on open participation or crowdsourcing often struggle to attract enough engagement, which may result in self-selection bias, since high-profile papers tend to receive attention while others are ignored3,5. This dynamic creates a risk that prominent researchers receive more credit than they deserve, while the work of scholars from less-advantaged regions or underrepresented groups is overlooked5. As a result, removing blinding can unintentionally reintroduce the social biases related to gender, nationality, or institutional prestige that anonymous review is designed to reduce3,5.

Beyond these concerns, the move toward open peer review also faces persistent social, cultural, and structural barriers. In some academic cultures, strong social hierarchies and respect for seniority make it difficult to provide critical feedback in open review systems, because reviewers may feel culturally restrained from challenging senior researchers1. The adoption of OPR models is also limited by a lack of technical support, as many existing manuscript-handling systems do not yet provide the software infrastructure needed to manage transparent or interactive workflows efficiently3,9.

IMPLICATIONS FOR REVIEWER EDUCATION

As peer review continues to evolve and AI tools become more common, there is a growing need for a clear and structured approach to reviewer education. Peer review should no longer be seen as an informal skill that develops only through experience; initiatives like the Peer Review Lab and the training modules from Reviewer Credits play a key role in meeting this need. Although Hames highlighted the need for reviewer training years ago, current expectations now extend to digital literacy and stronger ethical awareness14. Educational frameworks must also incorporate structured mentorship programs to ensure that the skills developed through peer review can be applied across a range of academic roles7. In addition, the transition toward transparent models allows early-career researchers to use published review reports as a pedagogical tool for learning the appropriate tone, length, and formulation of rigorous criticism3,5,6. These efforts are essential to prepare reviewers who can meet the evolving demands of a more transparent, digital, and responsible peer review ecosystem.

Reviewers are expected to handle conflicts of interest transparently and give constructive feedback that supports improvement without being disrespectful1,14. However, reviewers cannot be expected to navigate these complex standards alone. In this context, editorial offices and publishers also carry responsibility because they set expectations and clarify what counts as an acceptable review1,2. They can strengthen reviewer capacity by providing short guidance documents, structured templates, and examples of high-quality reports, and they can support consistency by offering brief feedback to reviewers after editorial decisions when appropriate1. This support helps journals protect research integrity and reduce review delays that can result from unclear expectations and uneven reviewing standards5. These responsibilities have become even more important as digital tools reshape review practices. As Sayab et al.8 emphasize, it is now essential for reviewers to be AI-literate so that they can distinguish between human-written and AI-generated content, identify fabricated data produced by AI, and use AI tools within ethical limits. However, education must emphasize that while automation can manage routine administrative tasks, it cannot replace human judgment in areas that require critical thinking and an understanding of complex research details7,10. Therefore, the reviewer of the future should be a digitally competent scholar who combines strong subject knowledge with ethical training.

FUTURE DIRECTIONS

The future of academic publishing will likely depend on neither a fully closed system nor uncontrolled openness. Instead, the solution lies in adopting context-sensitive hybrid models that can be tailored to the specific needs of different disciplines. The vision of a reimagined peer review system suggests a balance in which technology enhances human judgment rather than replacing it. Achieving this balance also depends on developing AI literacy, so that researchers can use automated tools responsibly without losing the essential human values of critical thinking and ethical judgment7,8. In addition, future systems must explicitly address the knowledge divide by ensuring that technological advancements do not exclude researchers in resource-limited regions who may lack advanced digital infrastructure2.

As peer review continues to evolve, institutions and funders need to support this shift by formally recognizing peer review contributions in academic evaluation and career advancement processes. This recognition should be accompanied by changes in incentive structures to encourage high-quality participation and reduce reviewer fatigue in an increasingly crowded publishing environment5. Moving from blind to open review thus reflects a cultural shift toward greater trust and collaboration. This transformation should be guided by international cooperation and evidence-based standards that prioritize transparency as a reliable indicator of technical validity and rigor2,6. Success depends on finding the right balance between the rigor of traditional practices and the potential of innovation, so that peer review continues to protect scientific integrity.

CONCLUSION

To translate these priorities into practice, journals should adopt clear peer review policies that define the level of transparency, explain whether reviewer reports will be published, and clarify whether reviewer names will be shared. These policies should also set minimum expectations for review quality, tone, and evidence use, and they should explain how editors will manage conflicts of interest, appeals, and integrity concerns. Publishers should support these efforts by investing in manuscript systems that make transparent workflows scalable and easy to manage, including reliable version tracking, secure record-keeping, and practical tools that help editors manage timelines and distribute workload fairly. Journals should also address reviewer fatigue through concrete process changes, for example, by improving reviewer matching, avoiding unnecessary rounds of review, and using transferable reviews when the model fits the discipline.

At the same time, journals and publishers should treat reviewer capacity building as a planned investment rather than an informal expectation. They can offer structured training that covers review standards, research integrity checks, and the responsible use of AI, and they can strengthen learning through short modules, mentorship pathways, and practice tasks that reflect real editorial cases. Reviewer recognition and certification are equally important because they make review work visible and valued, and they can motivate careful, constructive participation by linking competence to professional credit. At the policy level, institutions and funders should recognize peer review as scholarly labor and align incentives with quality rather than volume, since sustainable reform depends on reviewer time, competence, and trust. Together, these steps provide a realistic path to improve accountability, efficiency, and fairness while keeping peer review focused on its core purpose: protecting research integrity.

REFERENCES

  1. Larivière, V., C.R. Sugimoto and P. Bergeron, 2013. In their own image? A comparison of doctoral students' and faculty members' referencing behavior. J. Am. Soc. Inf. Sci. Technol., 64: 1045-1054.
  2. UNESCO, 2021. UNESCO Recommendation on Open Science. UNESCO Publishing, Paris, France, Pages: 34.
  3. Ross-Hellauer, T., 2017. What is open peer review? A systematic review. F1000Research, 6.
  4. Wolfram, D., P. Wang, A. Hembree and H. Park, 2020. Open peer review: Promoting transparency in open science. Scientometrics, 125: 1033-1051.
  5. Tennant, J.P., J.M. Dugan, D. Graziotin, D.C. Jacques and F. Waldner et al., 2017. A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research, 6.
  6. Holst, F., K. Eggleton and S. Harris, 2022. Transparency versus anonymity: Which is better to eliminate bias in peer review? Insights, 35.
  7. Sarwar, M., M. Machado, J. Robens, G. Dyke and M. Sayab, 2025. Bridging tradition and technology: Expert insights on the future of innovation in peer review. Sci. Editor, 48.
  8. Sayab, M., R. Ghosh, G. Dyke and M. Machado, 2025. Rethinking peer review in the AI era: Announcing the theme for peer review week 2025. Editorial Office News, 18.
  9. Ford, E., 2013. Defining and characterizing open peer review: A review of the literature. J. Scholarly Publ., 44: 311-326.
  10. Dyke, G., 2024. Bonfire of the (ethical) vanities and the “AI tool explosion”: Opportunities and challenges of the impact of artificial intelligence on research. Sci. Ed., 11: 155-159.
  11. Nature Human Behaviour, 2019. Transparency in peer review. Nat. Hum. Behav., 3: 1237-1237.
  12. Kowalczuk, M.K., F. Dudbridge, S. Nanda, S.L. Harriman, J. Patel and E.C. Moylan, 2015. Retrospective analysis of the quality of reports by author-suggested and non-author-suggested reviewers in journals operating on open or single-blind peer review models. BMJ Open, 5.
  13. Ross-Hellauer, T., A. Deppe and B. Schmidt, 2017. Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS ONE, 12.
  14. Hames, I., 2007. Peer Review and Manuscript Management in Scientific Journals: Guidelines for Good Practice. Wiley, New Jersey, United States, ISBN:9780470750803, Pages: 293.

How to Cite this paper?


APA-7 Style
Şenel, E. (2026). Reimagining Peer Review in Scholarly Publishing: From Blind to Open in the Digital Age. Trends in Scholarly Publishing, 5(1), 43-48. https://doi.org/10.21124/tsp.2026.43.48
