Abstract
The rapid adoption of artificial intelligence (AI) chatbots in academic peer review (PR) has sparked both excitement and concern, raising critical questions about the future of scientific integrity. This paper examined how AI tools, particularly ChatGPT (Chat Generative Pre-trained Transformer), are reshaping scientific PR. As these tools become more prevalent in academic evaluation, they bring both opportunities and challenges to scholarly communication. AI assistance offers valuable benefits: it can speed up review processes, help non-native English speakers express their ideas clearly, and improve overall text readability. However, our research revealed growing concerns about whether AI-assisted reviews can maintain the depth and authenticity that quality PR demands. We explored how the scientific community can balance these technological capabilities with the need for thorough, expert-driven evaluation. The potential for AI to introduce bias, overlook novel contributions, and promote uniformity in feedback threatens the nuanced insights traditionally offered by human experts. In this review, we critically evaluated the ethical implications of AI use in PR, focusing on three main issues: (i) the risks associated with over-reliance on AI tools by reviewers, including diminished engagement and critical thinking; (ii) the need for transparency and disclosure when AI tools assist in review generation; and (iii) the creation of ethical guidelines to balance AI’s capabilities with human expertise. Given AI’s increasing role in academia, the academic community must address these ethical challenges by establishing robust policies that ensure AI complements rather than replaces human judgment. We call for immediate action to develop clear guidelines governing AI’s role in academic evaluation, promoting transparent and responsible use of AI that upholds the integrity of PR, which is essential for the advancement of scientific knowledge.