[2602.21997] Enhancing LLM-Based Test Generation by Eliminating Covered Code

arXiv - Machine Learning 4 min read Article

Summary

This paper presents a novel method for enhancing LLM-based unit test generation by eliminating covered code, addressing challenges in testing complex software methods.

Why It Matters

Automated test generation is crucial for software quality assurance. This research enhances the capabilities of LLMs in generating effective unit tests, potentially improving software reliability and reducing testing time, which is vital in today's fast-paced development environments.

Key Takeaways

  • Proposes a scalable LLM-based unit test generation method.
  • Utilizes context retrieval and iterative test generation to improve coverage.
  • Demonstrates effectiveness through evaluations on open-source projects.

Computer Science > Software Engineering · arXiv:2602.21997 (cs) · Submitted on 25 Feb 2026

Title: Enhancing LLM-Based Test Generation by Eliminating Covered Code

Authors: WeiZhe Xu, Mengyu Liu, Fanxin Kong

Abstract: Automated test generation is essential for software quality assurance, with coverage rate serving as a key metric to ensure thorough testing. Recent advancements in Large Language Models (LLMs) have shown promise in improving test generation, particularly in achieving higher coverage. However, while existing LLM-based test generation solutions perform well on small, isolated code snippets, they struggle when applied to complex methods under test. To address these issues, we propose a scalable LLM-based unit test generation method. Our approach consists of two key steps. The first step is context information retrieval, which uses both LLMs and static analysis to gather relevant contextual information associated with the complex methods under test. The second step, iterative test generation with code elimination, repeatedly generates unit tests for the code slice, tracks the achieved coverage, and selectively removes code segments that have already been covered. This process simplifies the testing task and mitigates issues arising from token limits or reduced reasoning effectiveness associated with excessively lo...
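The iterative step described in the abstract can be sketched as a coverage-driven loop: generate tests for the remaining code slice, record which segments the new tests cover, then drop those segments before the next round. The sketch below is a minimal illustration of that loop, not the authors' implementation; `generate_tests_for` is a hypothetical stand-in for the LLM call, and representing a method as a set of line numbers is an assumption made for brevity.

```python
def generate_tests_for(lines):
    """Hypothetical stand-in for an LLM test-generation call.
    Pretends each round produces tests covering up to three of
    the remaining uncovered lines."""
    return set(sorted(lines)[:3])

def iterative_generation(method_lines, max_rounds=10):
    """Iterative test generation with code elimination (sketch):
    repeatedly generate tests for the uncovered slice, track
    coverage, and remove already-covered segments."""
    uncovered = set(method_lines)  # the code slice still under test
    covered = set()
    for _ in range(max_rounds):
        if not uncovered:
            break
        newly_covered = generate_tests_for(uncovered) & uncovered
        if not newly_covered:      # no progress: stop rather than loop
            break
        covered |= newly_covered
        uncovered -= newly_covered  # "eliminate" covered code from the slice
    return covered, uncovered

covered, remaining = iterative_generation(range(1, 8))
print(len(covered), len(remaining))  # → 7 0
```

Shrinking the slice each round keeps every prompt focused on code that still needs tests, which is how the method sidesteps the token-limit and long-prompt reasoning issues the abstract mentions.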

