INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair

This paper introduces INTERVENOR (INTERactiVE chaiN Of Repair), a system designed to emulate the interactive code repair process observed in humans, encompassing both code diagnosis and code repair. INTERVENOR prompts Large Language Models (LLMs) to play distinct roles during the code repair process, functioning as both a Code Learner and a Code Teacher. Specifically, the Code Learner is tasked with following instructions to generate or repair code, while the Code Teacher is responsible for crafting a Chain-of-Repair (CoR) to guide the Code Learner. While generating the CoR, the Code Teacher checks the code produced by the Code Learner and reassesses how to address its bugs based on error feedback received from compilers. Experimental results demonstrate that INTERVENOR surpasses baseline models, exhibiting improvements of approximately 18% and 4.3% over GPT-3.5 on code generation and code translation tasks, respectively. Our further analyses show that the CoR effectively illuminates the causes of bugs and outlines solution plans in natural language. With feedback from code compilers, INTERVENOR can accurately identify syntax errors and assertion errors and provide precise instructions to repair code. All data and code are available at https://github.com/NEUIR/INTERVENOR
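The Learner/Teacher interaction described above can be sketched as a simple generate-test-repair loop. This is a minimal illustration, not the paper's implementation: the `code_learner` and `code_teacher` functions below are hypothetical stubs standing in for LLM prompts (in the real system both roles are played by the same LLM under different instructions), and the test harness is a single hard-coded assertion.

```python
def run_tests(code: str) -> str:
    """Execute the candidate code against a test; return '' on success,
    otherwise the interpreter/assertion feedback (the compiler's role)."""
    try:
        env = {}
        exec(code, env)
        assert env["add"](2, 3) == 5
        return ""
    except Exception as e:
        return f"{type(e).__name__}: {e}"

def code_learner(instruction: str, feedback: str) -> str:
    # Stub: a real Code Learner would prompt an LLM with the task plus
    # the Teacher's chain-of-repair. Here the first draft is deliberately
    # buggy, and the draft after CoR guidance is fixed.
    if "subtracts" in feedback:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # buggy first attempt

def code_teacher(code: str, error: str) -> str:
    # Stub chain-of-repair: explain the bug in natural language and
    # outline the fix, grounded in the compiler/test feedback.
    return (f"The test failed ({error}): the function subtracts its "
            "arguments instead of adding them; replace '-' with '+'.")

def intervenor(instruction: str, max_rounds: int = 3) -> str:
    """Iterate Learner -> compiler feedback -> Teacher CoR -> Learner."""
    feedback = ""
    for _ in range(max_rounds):
        code = code_learner(instruction, feedback)
        error = run_tests(code)
        if not error:
            return code  # code passes; repair chain ends
        feedback = code_teacher(code, error)
    return code

final = intervenor("Write add(a, b) returning the sum of a and b.")
```

Here the loop terminates after one repair round: the buggy draft triggers an assertion error, the Teacher's CoR names the cause, and the Learner's second draft passes.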