
Break-It-Fix-It: Unsupervised Learning for Program Repair

Yasunaga, Michihiro; Liang, Percy
Abstract

We consider repair tasks: given a critic (e.g., compiler) that assesses the quality of an input, the goal is to train a fixer that converts a bad example (e.g., code with syntax errors) into a good one (e.g., code with no syntax errors). Existing works create training data consisting of (bad, good) pairs by corrupting good examples using heuristics (e.g., dropping tokens). However, fixers trained on this synthetically generated data do not extrapolate well to the real distribution of bad inputs. To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code. Based on these ideas, we iteratively update the breaker and the fixer while using them in conjunction to generate more paired data. We evaluate BIFI on two code repair datasets: GitHub-Python, a new dataset we introduce where the goal is to repair Python code with AST parse errors; and DeepFix, where the goal is to repair C code with compiler errors. BIFI outperforms existing methods, obtaining 90.5% repair accuracy on GitHub-Python (+28.5%) and 71.7% on DeepFix (+5.6%). Notably, BIFI does not require any labeled data; we hope it will be a strong starting point for unsupervised learning of various repair tasks.
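The abstract's two key ideas can be read as a simple training loop in which the critic filters model outputs into new (bad, good) pairs. Below is a minimal illustrative sketch under stated assumptions: the critic is approximated with Python's built-in ast module (matching the GitHub-Python setting of AST parse errors), while the fixer and breaker objects, their generate and train methods, and the round structure are hypothetical placeholders, not the authors' released implementation.

```python
import ast

def critic(code: str) -> bool:
    """Return True if the code parses, i.e., counts as 'good' for syntax repair."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def bifi(fixer, breaker, real_bad, real_good, rounds=2):
    """Sketch of the BIFI loop: grow verified (bad, good) pairs each round."""
    paired = []
    for _ in range(rounds):
        # (i) Run the fixer on real bad inputs; keep outputs the critic accepts.
        for bad in real_bad:
            fixed = fixer.generate(bad)
            if critic(fixed):
                paired.append((bad, fixed))
        # Retrain the breaker on verified pairs, in the good -> bad direction.
        breaker.train([(good, bad) for bad, good in paired])
        # (ii) Run the breaker on real good code; keep outputs the critic rejects,
        # so the synthetic bad examples look like realistic breakage.
        for good in real_good:
            broken = breaker.generate(good)
            if not critic(broken):
                paired.append((broken, good))
        # Retrain the fixer on the enlarged paired data, bad -> good.
        fixer.train(paired)
    return fixer, breaker
```

In this reading, no labeled pairs are ever required: the critic alone decides which generated examples enter the training data, and the breaker and fixer bootstrap each other across rounds.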
