[2604.06946] An empirical study of LoRA-based fine-tuning of large language models for automated test case generation
Computer Science > Software Engineering

arXiv:2604.06946 (cs) [Submitted on 8 Apr 2026]

Title: An empirical study of LoRA-based fine-tuning of large language models for automated test case generation

Authors: Milad Moradi, Ke Yan, David Colwell, Rhona Asgari

Abstract: Automated test case generation from natural language requirements remains a challenging problem in software engineering due to the ambiguity of requirements and the need to produce structured, executable test artifacts. Recent advances in large language models (LLMs) have shown promise in addressing this task; however, their effectiveness depends on task-specific adaptation and efficient fine-tuning strategies. In this paper, we present a comprehensive empirical study of parameter-efficient fine-tuning, specifically LoRA, for requirement-based test case generation. We evaluate multiple LLM families, including open-source and proprietary models, under a unified experimental pipeline. The study systematically explores the impact of key LoRA hyperparameters, namely rank, scaling factor, and dropout, on downstream performance. We propose an automated evaluation framework based on GPT-4o, which assesses generated test cases across nine quality dimensions. Experimental results demonstrate that LoRA-based fine-tuning significantly improves the performance…
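The abstract does not give the paper's exact configuration, but the three hyperparameters it varies (rank, scaling factor, dropout) correspond directly to fields of Hugging Face peft's LoraConfig. The sketch below is illustrative only: the base checkpoint, target modules, and hyperparameter values are assumptions, not the study's settings.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base checkpoint; the study evaluates several LLM families.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# The three hyperparameters explored in the study map onto LoraConfig:
#   r            -> LoRA rank (dimension of the low-rank update matrices)
#   lora_alpha   -> scaling factor (the update is scaled by alpha / r)
#   lora_dropout -> dropout applied to the LoRA path during training
config = LoraConfig(
    r=16,                                 # illustrative value
    lora_alpha=32,                        # illustrative value
    lora_dropout=0.05,                    # illustrative value
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

Because only the adapter matrices receive gradients, sweeping rank, scaling factor, and dropout across model families stays far cheaper than full fine-tuning, which is what makes a systematic study of this kind practical.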
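The GPT-4o-based evaluation framework can be read as an LLM-as-judge setup: the model scores each generated test case against a fixed rubric. A minimal sketch using the OpenAI Python client follows; the dimension names are placeholders (the paper defines nine dimensions not enumerated in the abstract), and the prompt wording is assumed.

```python
import json
from openai import OpenAI  # assumes the openai>=1.x Python client

# Placeholder dimension names: the paper scores nine quality dimensions,
# but the abstract does not list them, so these are illustrative only.
DIMENSIONS = ["correctness", "completeness", "clarity"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def judge_test_case(requirement: str, test_case: str) -> dict:
    """Ask GPT-4o to rate a generated test case on each rubric dimension (1-5)."""
    prompt = (
        "You are evaluating an automatically generated software test case.\n"
        f"Requirement:\n{requirement}\n\nTest case:\n{test_case}\n\n"
        f"Rate the test case from 1 to 5 on each of: {', '.join(DIMENSIONS)}. "
        "Respond with a JSON object mapping each dimension to an integer score."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep scoring as deterministic as the API allows
        response_format={"type": "json_object"},  # force parseable JSON output
    )
    return json.loads(resp.choices[0].message.content)
```

Pinning the temperature and forcing JSON output are standard choices in such frameworks, since rubric scores must be comparable across models and hyperparameter settings.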