
Inverse IFEval: Can LLMs Unlearn Stubborn Training Conventions to Follow Real Instructions?

Qinyan Zhang, Xinping Lei, Ruijie Miao, Yu Fu, Haojie Fan, Le Chang, Jiafan Hou, Dingling Zhang, Zhongfei Hou, Ziqiang Yang, Changxin Pu, Fei Hu, Jingkai Liu, Mengyun Liu, Yang Liu, Xiang Gao, Jiaheng Liu, Tong Yang, Zaiyuan Wang, Ge Zhang, Wenhao Huang
Abstract

Large Language Models (LLMs) achieve strong performance on diverse tasks but often exhibit cognitive inertia, struggling to follow instructions that conflict with the standardized patterns learned during supervised fine-tuning (SFT). To evaluate this limitation, we propose Inverse IFEval, a benchmark that measures models' Counter-intuitive Ability: their capacity to override training-induced biases and comply with adversarial instructions. Inverse IFEval introduces eight types of such challenges, including Question Correction, Intentional Textual Flaws, Code without Comments, and Counterfactual Answering. Using a human-in-the-loop pipeline, we construct a dataset of 1012 high-quality Chinese and English questions across 23 domains, evaluated under an optimized LLM-as-a-Judge framework. Experiments on existing leading LLMs demonstrate the necessity of our proposed Inverse IFEval benchmark. Our findings emphasize that future alignment efforts should not only pursue fluency and factual correctness but also account for adaptability under unconventional contexts. We hope that Inverse IFEval serves as both a diagnostic tool and a foundation for developing methods that mitigate cognitive inertia, reduce overfitting to narrow patterns, and ultimately enhance the instruction-following reliability of LLMs in diverse and unpredictable real-world scenarios.
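To make the idea concrete, the sketch below shows what a single Inverse IFEval-style test item might look like for the "Code without Comments" challenge. This is a hypothetical illustration, not the paper's actual pipeline: the names `InverseItem` and `no_comments_check` are invented here, and a simple programmatic check stands in for the paper's LLM-as-a-Judge framework.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of one counter-intuitive test item. In the actual
# benchmark, compliance is scored by an optimized LLM-as-a-Judge; here a
# rule-based check stands in for the judge for illustration only.

@dataclass
class InverseItem:
    challenge_type: str            # e.g. "Code without Comments"
    prompt: str                    # instruction that conflicts with SFT habits
    check: Callable[[str], bool]   # stand-in for the LLM judge

def no_comments_check(response: str) -> bool:
    """Pass only if the model resisted its trained habit of adding comments."""
    return not any(line.lstrip().startswith("#")
                   for line in response.splitlines())

item = InverseItem(
    challenge_type="Code without Comments",
    prompt="Write a Python function that reverses a string. "
           "Do NOT include any comments.",
    check=no_comments_check,
)

# A response that follows the adversarial instruction...
compliant = "def rev(s):\n    return s[::-1]"
# ...versus one showing cognitive inertia: a comment added out of habit.
habitual = "# Reverse a string\ndef rev(s):\n    return s[::-1]"

print(item.check(compliant))  # True
print(item.check(habitual))   # False
```

A full evaluation would run many such items across the eight challenge types and aggregate pass rates per model, which is how a benchmark like this can expose overfitting to narrow SFT conventions.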