DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data

Proof assistants like Lean have revolutionized mathematical proof verification, ensuring high accuracy and reliability. Although large language models (LLMs) show promise in mathematical reasoning, their advancement in formal theorem proving is hindered by a lack of training data. To address this issue, we introduce an approach to generate extensive Lean 4 proof data derived from high-school and undergraduate-level mathematical competition problems. This approach involves translating natural language problems into formal statements, filtering out low-quality statements, and generating proofs to create synthetic data. After fine-tuning the DeepSeekMath 7B model on this synthetic dataset, which comprises 8 million formal statements with proofs, our model achieved whole-proof generation accuracies of 46.3% with 64 samples and 52% cumulatively on the Lean 4 miniF2F test, surpassing the baseline GPT-4 at 23.0% with 64 samples and a tree search reinforcement learning method at 41.0%. Additionally, our model successfully proved 5 out of 148 problems in the Lean 4 Formalized International Mathematical Olympiad (FIMO) benchmark, while GPT-4 failed to prove any. These results demonstrate the potential of leveraging large-scale synthetic data to enhance theorem-proving capabilities in LLMs. Both the synthetic dataset and the model will be made available to facilitate further research in this promising field.
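To illustrate the kind of artifact the pipeline produces, here is a minimal, hypothetical example of a natural-language problem ("show that the sum of two even integers is even") autoformalized as a Lean 4 statement with a proof. This example is not drawn from the released dataset; the theorem name is illustrative, and it assumes Mathlib's definition of `Even` (an integer `n` is even iff `∃ r, n = r + r`).

```lean
import Mathlib.Algebra.Parity

-- Hypothetical autoformalization of: "the sum of two even integers is even".
-- `Even a` unfolds to `∃ r, a = r + r` in Mathlib.
theorem sum_of_evens_is_even (a b : ℤ) (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  obtain ⟨m, hm⟩ := ha   -- hm : a = m + m
  obtain ⟨n, hn⟩ := hb   -- hn : b = n + n
  exact ⟨m + n, by rw [hm, hn]; ring⟩
```

In the pipeline described above, only the statement would come from translation; the proof itself is what the model must generate, and statement/proof pairs like this one form the fine-tuning data.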