A Survey on Large Language Model Benchmarks

In recent years, with the rapid expansion of the depth and breadth of large language models' capabilities, a growing number of corresponding evaluation benchmarks have emerged. As quantitative assessment tools for model performance, benchmarks are not only a core means of measuring model capabilities but also a key element in guiding the direction of model development and promoting technological innovation. We present the first systematic review of the current state and development of large language model benchmarks, categorizing 283 representative benchmarks into three groups: general capabilities, domain-specific, and target-specific. General capability benchmarks cover aspects such as core linguistics, knowledge, and reasoning; domain-specific benchmarks focus on fields such as the natural sciences, humanities and social sciences, and engineering technology; target-specific benchmarks address risks, reliability, agents, and related concerns. We point out that current benchmarks suffer from problems such as inflated scores caused by data contamination, unfair evaluation resulting from cultural and linguistic biases, and a lack of evaluation of process credibility and dynamic environments, and we provide a reference design paradigm for future benchmark innovation.