[2410.15756] Automated Proof Generation for Rust Code via Self-Evolution
Summary
This paper presents SAFE, a framework for automated proof generation for Rust code. It addresses the scarcity of human-written proofs with a self-evolving model that improves verification accuracy over successive rounds of data synthesis and fine-tuning.
Why It Matters
The development of SAFE is significant as it automates the formal verification process, which is crucial for ensuring code correctness. This innovation can reduce the manual effort required in proof construction, making it easier for developers to maintain high-quality code, especially in safety-critical applications. The framework's ability to learn from incorrect proofs also enhances its utility in real-world scenarios.
Key Takeaways
- SAFE automates proof generation for Rust code, improving efficiency.
- The framework uses a self-evolving cycle of data synthesis and fine-tuning.
- It achieves a 52.52% accuracy rate, significantly outperforming GPT-4o.
- SAFE repurposes incorrect proofs to enhance model self-debugging capabilities.
- This advancement supports developers in maintaining code correctness with less manual effort.
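The self-evolving cycle in the takeaways above can be sketched in miniature. This is a hypothetical illustration, not SAFE's actual implementation: the `toy_model` callable and the string-matching `verifier_accepts` stand in for a fine-tuned LLM and a symbolic verifier such as Verus, and the fine-tuning step is elided.

```python
def verifier_accepts(code, proof):
    """Stand-in for a symbolic verifier: accepts a candidate proof
    iff it names the function being verified (toy criterion only)."""
    fn_name = code.split("fn ")[1].split("(")[0]
    return fn_name in proof

def sample_proofs(model, code, n=4):
    """Sample n candidate proofs from the current model (a callable)."""
    return [model(code, i) for i in range(n)]

def self_evolve(model, corpus, rounds=1):
    """One self-evolving cycle: synthesize candidate proofs, let the
    verifier split them into correct and incorrect sets."""
    correct, incorrect = [], []
    for _ in range(rounds):
        for code in corpus:
            for proof in sample_proofs(model, code):
                if verifier_accepts(code, proof):
                    correct.append((code, proof))
                else:
                    incorrect.append((code, proof))
        # Fine-tuning elided: in SAFE, `correct` becomes training data
        # and `incorrect` is repurposed to teach self-debugging.
    return correct, incorrect

# Toy "model": half of its samples mention the target function.
toy_model = lambda code, i: (
    "ensures result == max_val_spec(a, b)" if i % 2 == 0 else "ensures true"
)
corpus = ["fn max_val(a: i32, b: i32) -> i32 { if a > b { a } else { b } }"]
good, bad = self_evolve(toy_model, corpus)
```

The key design point is that the verifier, not a human, provides the training signal: every sampled proof is definitively labeled, so both accepted and rejected candidates become usable data.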
Computer Science > Software Engineering
arXiv:2410.15756 (cs)
[Submitted on 21 Oct 2024 (v1), last revised 14 Feb 2026 (this version, v3)]
Title: Automated Proof Generation for Rust Code via Self-Evolution
Authors: Tianyu Chen, Shuai Lu, Shan Lu, Yeyun Gong, Chenyuan Yang, Xuheng Li, Md Rakib Hossain Misu, Hao Yu, Nan Duan, Peng Cheng, Fan Yang, Shuvendu K Lahiri, Tao Xie, Lidong Zhou
Abstract: Ensuring correctness is crucial for code generation. Formal verification offers a definitive assurance of correctness, but demands substantial human effort in proof construction and hence raises a pressing need for automation. The primary obstacle lies in the severe lack of data: there are far fewer proofs than code snippets for Large Language Models (LLMs) to train upon. In this paper, we introduce SAFE, a framework that overcomes the lack of human-written proofs to enable automated proof generation for Rust code. SAFE establishes a self-evolving cycle where data synthesis and fine-tuning collaborate to enhance the model capability, leveraging the definitive power of a symbolic verifier in telling correct proofs from incorrect ones. SAFE also repurposes the large number of synthesized incorrect proofs to train the self-debugging capability of the fine-tuned models, empowering them to fix incorrect proofs based on the verifier's feedback.
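The abstract's last point, repurposing rejected proofs as self-debugging data, amounts to packaging each failure together with the verifier's feedback as a repair example. A minimal sketch, assuming a simple prompt/completion fine-tuning format (the field names and the example strings below are illustrative, not SAFE's actual data schema):

```python
def make_debug_example(code, bad_proof, error_msg, fixed_proof):
    """Pack one (input -> target) pair for fine-tuning a model to
    repair a rejected proof given the verifier's error message."""
    prompt = (
        f"Code:\n{code}\n"
        f"Rejected proof:\n{bad_proof}\n"
        f"Verifier error:\n{error_msg}\n"
        "Fix the proof:"
    )
    return {"prompt": prompt, "completion": fixed_proof}

# Hypothetical example of one self-debugging training pair.
example = make_debug_example(
    code="fn abs(x: i32) -> i32 { if x < 0 { -x } else { x } }",
    bad_proof="ensures result >= 0",
    error_msg="postcondition not proved: possible overflow when x == i32::MIN",
    fixed_proof="requires x > i32::MIN\nensures result >= 0",
)
```

Pairing each rejected proof with the verifier's error message is what lets the fine-tuned model learn to act on feedback at inference time, rather than merely resampling from scratch.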