[2602.18307] VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean

arXiv - Machine Learning

Summary

The paper introduces VeriSoftBench, a benchmark of repository-scale formal verification tasks in Lean 4, and uses it to show that LLMs and specialized provers that excel on Mathlib-style mathematics struggle when proofs depend on project-specific repository context.

Why It Matters

VeriSoftBench addresses a gap in formal verification benchmarks: existing suites are drawn from Mathlib-style mathematics, while real-world software verification happens inside definition-rich codebases. By evaluating LLMs in this repository setting, it provides concrete guidance for improving automated proof systems, which are essential for software reliability and correctness.

Key Takeaways

  • VeriSoftBench includes 500 Lean 4 proof obligations from open-source projects.
  • LLMs trained on Mathlib-style mathematics perform poorly in repository-centric settings.
  • Proof success drops as transitive repository dependence grows: obligations whose proofs draw on large, multi-hop dependency closures are less likely to be solved.
  • Restricting context to a proof's dependency closure improves performance over exposing the full repository, but substantial headroom remains.
  • The benchmark and evaluation suite are publicly available for further research.

Computer Science > Software Engineering
arXiv:2602.18307 (cs)
[Submitted on 20 Feb 2026]

Title: VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean
Authors: Yutong Xin, Qiaochu Chen, Greg Durrett, Işil Dillig

Abstract: Large language models have achieved striking results in interactive theorem proving, particularly in Lean. However, most benchmarks for LLM-based proof automation are drawn from mathematics in the Mathlib ecosystem, whereas proofs in software verification are developed inside definition-rich codebases with substantial project-specific libraries. We introduce VeriSoftBench, a benchmark of 500 Lean 4 proof obligations drawn from open-source formal-methods developments and packaged to preserve realistic repository context and cross-file dependencies. Our evaluation of frontier LLMs and specialized provers yields three observations. First, provers tuned for Mathlib-style mathematics transfer poorly to this repository-centric setting. Second, success is strongly correlated with transitive repository dependence: tasks whose proofs draw on large, multi-hop dependency closures are less likely to be solved. Third, providing curated context restricted to a proof's dependency closure improves performance relative to exposing the full repository, but nevertheless leaves substantial room for improvement.
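To make the repository-centric setting concrete, here is a minimal Lean 4 sketch of the kind of obligation the abstract describes. It is not drawn from the benchmark; all names (`clampedAdd`, `accumulate`, `accumulate_le_cap`) are hypothetical. The point is that the goal mentions project-specific definitions, so a prover must traverse a small dependency closure rather than rely on standard library lemmas alone.

```lean
-- Hypothetical project-specific definition. In the benchmark, such
-- definitions typically live in *other files* of the repository, so a
-- prover must resolve cross-file dependencies before it can reason.
def clampedAdd (cap x y : Nat) : Nat :=
  min cap (x + y)

-- One more hop in the dependency closure: `accumulate` is built on
-- `clampedAdd`, which is built on `min`.
def accumulate (cap : Nat) : List Nat → Nat
  | []      => 0
  | x :: xs => clampedAdd cap x (accumulate cap xs)

-- The proof obligation: discharging it requires unfolding the project's
-- own definitions, not just recalling library lemmas about standard types.
theorem accumulate_le_cap (cap : Nat) (xs : List Nat) :
    accumulate cap xs ≤ cap := by
  cases xs with
  | nil  => simp only [accumulate]; omega
  | cons x xs => simp only [accumulate, clampedAdd]; omega
```

At repository scale, the closure behind a single goal can span many files and hundreds of definitions; that transitive dependence is what the authors find anti-correlated with proof success.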
