[2602.06098] A Theoretical Analysis of Test-Driven LLM Code Generation
Computer Science > Software Engineering

arXiv:2602.06098 (cs)

[Submitted on 5 Feb 2026 (v1), last revised 29 Mar 2026 (this version, v2)]

Title: A Theoretical Analysis of Test-Driven LLM Code Generation

Authors: Nicolas Menet, Michael Hersche, Andreas Krause, Abbas Rahimi

Abstract: Coding assistants are increasingly used in test-driven software development, yet the theoretical mechanisms behind their environment-interaction strategies remain underexplored. We provide a probabilistic framework for two dominant paradigms: code selection after generation using the execution environment, and code generation conditioned on environment feedback. First, we formalize several well-established selection heuristics as environment-aware estimators of code correctness. We theoretically prove that estimators based on fuzzy functional similarity add an inductive bias and strictly dominate estimators based on functional equivalence in terms of signal-to-noise ratio. Second, we frame backprompting as an in-context approximation of Thompson sampling. We derive a novel regret bound for reward functions with unobservable components, theoretically explaining why the effectiveness of backprompting is limited by the ambiguity of the informal task description (an irreducible regret). Using three state-of-the-art open-weight models, we corrobo...
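To make the selection paradigm concrete, here is a minimal sketch of execution-based candidate selection using a fuzzy functional-similarity score rather than strict functional equivalence. The helper names (`fuzzy_sim`, `select_candidate`) and the use of `difflib.SequenceMatcher` as the similarity measure are illustrative assumptions, not the paper's actual estimator: the idea is only that each candidate is run on shared inputs and the candidate whose outputs agree most, on average, with the other candidates' outputs is selected.

```python
from difflib import SequenceMatcher


def fuzzy_sim(a, b):
    # Fuzzy similarity of two outputs in [0, 1]; strict functional
    # equivalence would instead check a == b (0/1 signal).
    return SequenceMatcher(None, str(a), str(b)).ratio()


def select_candidate(candidates, inputs):
    """Return the index of the candidate whose outputs are, on average,
    most similar to the outputs of every other candidate on `inputs`."""
    outputs = [[f(x) for x in inputs] for f in candidates]
    n = len(candidates)
    scores = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if i == j:
                continue
            # Mean per-input similarity between candidates i and j.
            total += sum(fuzzy_sim(a, b)
                         for a, b in zip(outputs[i], outputs[j])) / len(inputs)
        scores.append(total / (n - 1))
    return max(range(n), key=scores.__getitem__)


# Three toy "candidate programs": two agree, one is buggy.
cands = [lambda x: x * 2, lambda x: 2 * x, lambda x: x + 2]
best = select_candidate(cands, inputs=[1, 5, 10])
```

Because the fuzzy score is graded rather than binary, partially agreeing outputs still contribute signal, which is the intuition behind the paper's signal-to-noise-ratio dominance result.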