AI-Generated Letters Surge in Scientific Journals, Researchers Warn of Synthetic Noise and Misinformation
Just two days after Carlos Chaccour and Matthew Rudd published a paper on malaria control in The New England Journal of Medicine, an editor shared a letter with them that raised serious concerns. Though well written, the letter cited Chaccour and Rudd's prior work in ways that contradicted their findings. Suspecting that artificial intelligence had fabricated the references, the researchers launched an investigation into more than 730,000 letters to scientific journals published over the past two decades.

Their findings, detailed in a preprint on Research Square, reveal a sharp rise in a small group of so-called "prolific debutante" authors: newcomers who began rapidly producing large numbers of letters in 2023. These authors, they believe, are likely using AI tools such as ChatGPT to generate content at scale. From 2023 to 2025, nearly 8,000 authors moved from the bottom to the top 5% of letter writers. Despite making up only 3% of active authors during that period, they accounted for 22% of all letters published (around 23,000 in total) across 1,930 journals, including 175 in The Lancet and 122 in NEJM. The surge comes amid otherwise stable letter-writing activity: the average number of letters per author rose only slightly, from 1.16 to 1.34, over 20 years, and the number of journals accepting letters has remained flat since 2022.

The most striking case involved a physician from Qatar who published no letters in 2024 but more than 80 in 2025. His letters spanned 58 different topics, an improbable range for a single researcher. Chaccour's team ran 81 of these letters through the Pangram AI detector; the average score was 80 out of 100, indicating a high likelihood of AI use. In contrast, 74 letters from a prolific writer in the 2000s, before ChatGPT existed, scored zero.

Letters are particularly vulnerable to AI abuse because they are short, often lack peer review, and can be published quickly.
They serve as a venue for post-publication commentary, but they are also exploited to inflate publication counts, a practice highlighted in a 2023 investigation by Retraction Watch and Science.

While some AI use may help non-native English speakers improve their writing, experts warn of a flood of low-quality, repetitive content. Seth Leopold, editor-in-chief of Clinical Orthopaedics and Related Research, notes that many AI-generated letters have identical structures and awkward phrasing, and that they fail to raise meaningful critiques, often pointing out limitations already acknowledged in the original paper. To combat this, Leopold's journal now requires authors of flagged submissions to provide verifiable quotes from the sources they cite. Though the policy adds workload, he stresses that preserving credibility is essential. "If we lose the confidence of our readers, we've lost everything," he says.

Chaccour warns that AI-generated letters risk drowning out genuine scientific discourse. "It took me six years and $25 million to publish that NEJM paper," he says. "It may have taken the Qatari author minutes to write the letter about it." That imbalance threatens to erode trust in scholarly communication. Without vigilance, the flood of synthetic content could undermine the integrity of scientific journals and the research they aim to support.
