
BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions

Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, Simon Brunner, Chen Gong, Thong Hoang, Armel Randy Zebaze, Xiaoheng Hong, Wen-Ding Li, Jean Kaddour, Ming Xu, Zhihan Zhang, Prateek Yadav, Naman Jain, Alex Gu, Zhoujun Cheng, Jiawei Liu, Qian Liu, Zijian Wang, David Lo, Binyuan Hui, Niklas Muennighoff, Daniel Fried, Xiaoning Du, Harm de Vries, Leandro Von Werra
Abstract

Automated software engineering has been greatly empowered by the recent advances in Large Language Models (LLMs) for programming. While current benchmarks have shown that LLMs can perform various software engineering tasks like human developers, the majority of their evaluations are limited to short and self-contained algorithmic tasks. Solving challenging and practical programming tasks requires the capability of utilizing diverse function calls as tools to efficiently implement functionalities like data analysis and web development. In addition, using multiple tools to solve a task needs compositional reasoning by accurately understanding complex instructions. Fulfilling both of these characteristics can pose a great challenge for LLMs. To assess how well LLMs can solve challenging and practical programming tasks, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries and 7 domains for 1,140 fine-grained programming tasks. To evaluate LLMs rigorously, each programming task encompasses 5.6 test cases with an average branch coverage of 99%. In addition, we propose a natural-language-oriented variant of BigCodeBench, BigCodeBench-Instruct, that automatically transforms the original docstrings into short instructions with only essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores up to 60%, significantly lower than the human performance of 97%. The results underscore the need for further advancements in this area.
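To make the task format concrete, below is a minimal, self-contained sketch of a BigCodeBench-style task, assuming the format the abstract describes: a function signature with a docstring as the prompt, a solution that composes calls across multiple libraries, and unittest-based test cases for rigorous checking. The task content, the function name `task_func`, and the condensed instruction are invented for illustration and are not actual benchmark items.

```python
# A hypothetical BigCodeBench-style task (illustrative, not from the benchmark).
# The prompt is a signature plus a docstring; the solution must compose calls
# across several libraries (here: csv, collections, and statistics).
import collections
import csv
import io
import statistics
import unittest


def task_func(csv_text):
    """
    Parse CSV text with header columns 'category' and 'value', count the rows
    per category, and compute the mean of the 'value' column.

    Returns:
        tuple: (dict mapping each category to its row count,
                float mean of all 'value' entries)
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    counts = collections.Counter()
    values = []
    for row in reader:
        counts[row["category"]] += 1
        values.append(float(row["value"]))
    return dict(counts), statistics.mean(values)


# BigCodeBench-Instruct would condense the docstring above into a short
# natural-language instruction, e.g. "Count rows per category in the CSV text
# and return the counts with the mean of the 'value' column." (paraphrased
# illustration, not an actual transformed instruction).


class TestTaskFunc(unittest.TestCase):
    def test_counts_and_mean(self):
        csv_text = "category,value\na,1\na,3\nb,2\n"
        counts, mean = task_func(csv_text)
        self.assertEqual(counts, {"a": 2, "b": 1})
        self.assertAlmostEqual(mean, 2.0)


if __name__ == "__main__":
    unittest.main()
```

A real benchmark task is graded by running such test suites against the model's completion, which is how the reported 5.6 test cases per task and 99% average branch coverage come into play.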
