AI-Authored Papers on the Rise, Sparking Crisis of Academic Integrity
Artificial intelligence (AI) has left its mark across the academic world, with hundreds of published papers flagged for AI-generated content.

Researchers Alex Glynn of the University of Louisville and Guillaume Cabanac of France's University of Toulouse have uncovered instances in which publishers quietly altered articles after publication, without any public acknowledgment or retraction statement. These "stealth corrections" involved removing questionable AI-generated text and erasing any trace of the original alterations. In one case, a paper in the journal Toxicology contained the phrase "Regenerate response", a telltale artifact of a chatbot interface, which was later removed without any formal explanation. Glynn and Cabanac's investigation identified more than 100 such hidden revisions across academic journals, many involving the use of AI tools. "Such practices undermine scientific trust," said Cabanac, emphasizing that a transparent publication record is crucial to the credibility and integrity of published content.

Even when corrections were made formally, they often fell short of addressing the underlying issues. Of 11 formally corrected papers the pair examined, only one adhered to the journal's AI disclosure policy. In most cases, AI-generated text was either removed or marked with symbols, without the required statement of AI use appearing in the body of the paper.

The scientific community's concerns about AI usage center on two fundamental principles. First, AI cannot be listed as an author. This stance is supported by major publishing bodies and journals, including the International Committee of Medical Journal Editors (ICMJE), the Committee on Publication Ethics (COPE), Nature, Science, Cell, The Lancet, and JAMA. The reasoning is straightforward: AI cannot assume the responsibilities that authorship entails, particularly vouching for the originality, accuracy, and completeness of the research.

Second, any AI assistance must be properly disclosed. Many publishers require that authors who use AI tools during the research or writing process, especially to generate text, images, or code, declare that use in a designated place such as the methods section, the acknowledgments, or the cover letter. This is meant to ensure transparency and adherence to ethical standards.

In practice, however, implementation of these policies varies widely. Nature requires that any use of large language models be described in detail in the methods section, and it prohibits AI-generated images except in rare cases. Science, on the other hand, allows AI to help create figures and write content provided this is clearly stated in the cover letter, though AI itself cannot be listed as an author or collaborator. Cell likewise forbids listing AI as an author and requires that AI contributions be noted in both the cover letter and the acknowledgments to safeguard the accuracy and originality of the research; unauthorized use of AI to generate images is strictly prohibited.

These discrepancies highlight the ongoing challenge of regulating AI in academic publishing. While major institutions and journals have established guidelines, enforcement remains inconsistent, raising questions about whether current policies can ensure the integrity of the scientific literature.