
AI Conference Trials Self-Ranking Papers to Tackle Submission Flood and Predict Impact

The growing flood of research submissions to top artificial intelligence conferences has sparked a search for smarter ways to evaluate quality and impact. With some events seeing submission numbers rise more than tenfold over the past decade, sorting through the volume has become a major challenge. Buxin Su, a mathematician at the University of Pennsylvania in Philadelphia, argues that the issue goes beyond sheer volume: many authors submit multiple papers, making it harder for reviewers to identify the most promising work.

In a study posted on the preprint server arXiv in October, Su and his colleagues introduced a novel approach: requiring authors who submit more than one paper to rank their own submissions by quality and potential impact. These self-rankings are then calibrated against the assessments of peer reviewers, who remain unaware of the self-rankings.

The method was tested on 2,592 papers submitted by 1,342 researchers to the 2023 International Conference on Machine Learning (ICML), one of the most prestigious AI events, held in Honolulu, Hawaii. Sixteen months after the conference, the team evaluated each paper's real-world influence by tracking citation counts and comparing them to the calibrated peer-review scores. The results showed a strong correlation: papers ranked highest by their authors received, on average, twice as many citations as those ranked lowest. "The authors' rankings are a very good predictor of long-term impact," Su said. "The calibrated scores better reflect the true quality."

The system will be formally adopted at ICML 2026, set to take place in Seoul, South Korea. Su, a member of the conference's integrity committee, believes the method could be useful across research fields, but is especially suited to AI conferences due to the high number of multiple submissions and the rising volume of AI-generated papers.
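The article does not spell out how the self-rankings are combined with review scores. One natural formalization, used in published work on owner-assisted scoring, is an isotonic (pool-adjacent-violators) adjustment: the raw review scores are projected onto the nearest set of scores that respects the order the author supplied. The sketch below assumes this formulation; the function name `calibrate` and the use of mean review scores are illustrative, not drawn from the study itself.

```python
def calibrate(scores):
    """Adjust review scores to respect an author's self-ranking.

    `scores` are raw mean review scores for one author's papers,
    listed in the author's self-ranked order, best first. Returns
    the closest (least-squares) scores that are non-increasing in
    that order, via pool-adjacent-violators.
    """
    # Each block holds (sum of scores, count of papers pooled).
    blocks = []
    for s in scores:
        blocks.append([s, 1])
        # If a later paper outscores an earlier (better-ranked) one,
        # pool the two blocks so their papers share one mean score.
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    # Expand pooled blocks back into per-paper scores.
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out
```

For example, if an author ranks three papers best-to-worst but the reviewers score them 6.0, 4.0, and 5.0, the last two scores contradict the ranking and are pooled to 4.5 each, leaving the first untouched. Scores already consistent with the ranking pass through unchanged.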
Nihar Shah, a computer scientist at Carnegie Mellon University, calls the idea "really novel and really cool," but questions whether authors can truly judge their own work's impact more accurately than reviewers. He suggests the observed correlation might stem from the study's methodology rather than an inherent advantage of self-assessment. Still, Shah acknowledges the urgent need for solutions to the submission surge and welcomes any effort to improve evaluation processes.

Emma Pierson, a computer scientist at the University of California, Berkeley, sees value in the approach. "You know which papers are your 'baby', which ones you really love," she said. "I think the author's own self-ranking would be one valuable source of input if you can get them to honestly provide it."

Both experts caution that the system could be vulnerable to manipulation: researchers might inflate the rankings of weaker papers to counterbalance negative reviewer feedback. Nonetheless, the idea represents a promising step toward addressing the growing complexity of academic evaluation in an era of explosive research growth.
