[2603.01712] FT-Dojo: Towards Autonomous LLM Fine-Tuning with Language Agents
Computer Science > Artificial Intelligence
arXiv:2603.01712 (cs) [Submitted on 2 Mar 2026]

Title: FT-Dojo: Towards Autonomous LLM Fine-Tuning with Language Agents
Authors: Qizheng Li, Yifei Zhang, Xiao Yang, Xu Yang, Zhuo Wang, Weiqing Liu, Jiang Bian

Abstract: Fine-tuning large language models for vertical domains remains a labor-intensive and expensive process, requiring domain experts to curate data, configure training, and iteratively diagnose model behavior. Despite growing interest in autonomous machine learning, no prior work has tackled end-to-end LLM fine-tuning with agents. Can LLM-based agents automate this complete process? We frame this as a substantially open problem: agents must navigate an open-ended search space spanning data curation from diverse data sources, processing with complex tools, building a training pipeline, and iteratively refining their approach based on evaluation outcomes in rapidly growing logs--an overall scenario far more intricate than existing benchmarks. To study this question, we introduce FT-Dojo, an interactive environment comprising 13 tasks across 5 domains. We further develop FT-Agent, an autonomous system that mirrors human experts by leveraging evaluation-driven feedback to iteratively diagnose failures and refine fine-tuning strategies. Experiments on FT-Dojo demonstrate...
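The evaluation-driven loop the abstract attributes to FT-Agent (curate data, train, evaluate, then diagnose from the log and revise the strategy) can be sketched as follows. This is a minimal toy illustration, not the authors' code: every function name, the quality-score pool, and the refinement rule (raise the data-quality threshold after a weak run) are hypothetical stand-ins for the real curation, training, and diagnosis steps.

```python
def curate(strategy):
    # Toy data curation: keep samples whose quality score (0-100)
    # meets the strategy's current threshold.
    pool = [20, 50, 70, 90]
    return [x for x in pool if x >= strategy["min_quality"]]

def train_and_evaluate(data):
    # Toy proxy for a training + evaluation run: the "model score"
    # is just the mean quality of the curated data.
    return sum(data) // len(data) if data else 0

def refine(strategy, score, log):
    # Diagnose from the growing evaluation log and adjust the
    # strategy, mirroring how an expert tightens curation after
    # a weak run.
    log.append((score, strategy["min_quality"]))
    if score < 80:
        strategy["min_quality"] += 20
    return strategy

strategy = {"min_quality": 10}
log, score = [], 0
for step in range(4):
    data = curate(strategy)
    score = train_and_evaluate(data)
    if score >= 80:          # stopping criterion: target quality reached
        break
    strategy = refine(strategy, score, log)
# After three refinements the threshold reaches 70 and the run
# clears the target score of 80.
```

The point of the sketch is the control flow, not the arithmetic: each iteration's evaluation outcome is appended to a log that the agent consults when revising its fine-tuning strategy, which is the feedback structure the abstract describes.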