arXiv Now Requires Peer Review for CS Review and Position Papers
In October 2025, arXiv updated its moderation practice for the computer science (CS) category: review and position papers must now be accepted by a peer-reviewed journal or conference before they can be submitted. Although this is not a formal policy shift (arXiv has never officially accepted review or position papers as standard content types), it responds to an overwhelming influx of such papers, many generated or assisted by AI, over the past few years.

In the past, moderators occasionally accepted a few high-quality review or position papers at their discretion because of their scholarly value and interest to the research community. The rise of generative AI has drastically changed that landscape. Thousands of low-quality, AI-generated review and position papers, often little more than annotated bibliographies with minimal original analysis, have flooded the CS category, making them difficult for moderators to manage. arXiv relies on volunteer experts, who no longer have the time or resources to assess the quality of these non-research papers. The new requirement aims to preserve arXiv's core mission: rapidly sharing original scientific research.

To comply, authors must provide documentation of successful peer review, such as a journal reference and DOI, at submission time. Papers without it will likely be rejected. Authors whose papers are rejected for lack of peer review may appeal to resubmit once the paper is accepted at a reputable venue, but they cannot re-upload the same paper without an approved appeal.

The change does not apply to all arXiv categories. Papers on the societal impact of science and technology, such as those in cs.CY or physics.soc-ph, can still be submitted without peer review, as they are considered legitimate research. The requirement may, however, be extended to other categories if they face similar surges in AI-generated review or position papers; arXiv's category moderators, who are subject-matter experts, have the authority to adapt their practices as needed.
arXiv emphasized that the policy is not about restricting free sharing but about ensuring quality and efficiency. Trusted peer-reviewed venues such as IEEE, Annual Reviews, and Computing Surveys already curate high-value review and position papers, often on critical topics like AI ethics, privacy, and safety. By relying on these established systems, arXiv can still share valuable content while focusing its limited moderation resources on original research.

The update also clarifies that conference workshop reviews, which are often less rigorous, do not meet the required standard. This prevents authors from using low-barrier venues to circumvent the new rule.

In summary, arXiv's move is a strategic response to the challenges posed by AI-generated content. It aims to protect the integrity of the platform, help readers find high-quality, expert-curated work, and free up moderators to focus on advancing scientific discovery. While the change is currently limited to CS, it may set a precedent for other arXiv categories if similar issues arise. The platform remains committed to open access, but with a renewed focus on quality and sustainability in the age of generative AI.
