[2604.09418] Automated Instruction Revision (AIR): A Structured Comparison of Task Adaptation Strategies for LLM
Computer Science > Computation and Language
arXiv:2604.09418 (cs) [Submitted on 10 Apr 2026]
Title: Automated Instruction Revision (AIR): A Structured Comparison of Task Adaptation Strategies for LLM
Authors: Solomiia Bilyk, Volodymyr Getmanskyi, Taras Firman

Abstract: This paper studies Automated Instruction Revision (AIR), a rule-induction-based method for adapting large language models (LLMs) to downstream tasks using limited task-specific examples. We position AIR within the broader landscape of adaptation strategies, including prompt optimization, retrieval-based methods, and fine-tuning. We then compare these approaches across a diverse benchmark suite designed to stress different task requirements, such as knowledge injection, structured extraction, label remapping, and logical reasoning. The paper argues that adaptation performance is strongly task-dependent: no single method dominates across all settings. Across five benchmarks, AIR was strongest or near-best on label-remapping classification, while KNN retrieval performed best on closed-book QA, and fine-tuning dominated structured extraction and event-order reasoning. AIR is most promising when task behavior can be captured by compact, interpretable instruction rules, while retrieval and fine-tuning remain stronger in t...
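The abstract describes AIR only at a high level: rules are induced from a handful of task examples and folded into the model's instruction. As a rough illustration of that idea (not the authors' algorithm; the rule-induction heuristic, the `revise_instructions` and `apply_rules` names, and the keyword-matching stand-in for an LLM judgment are all assumptions for this sketch), a minimal loop might add an interpretable rule for each example the current rule set handles incorrectly:

```python
# Hypothetical sketch of rule-induction-style instruction revision.
# A keyword->label lookup stands in for the LLM; the real method's
# rule format and induction procedure are not specified in the abstract.

def apply_rules(rules, text, default="unknown"):
    """Return the label of the first rule whose keyword appears in the input."""
    for keyword, label in rules:
        if keyword in text.lower():
            return label
    return default

def revise_instructions(examples, rules=None):
    """For each example the current rules mislabel, induce a new rule.

    The 'longest token' trigger is a toy heuristic; an actual system
    would have the LLM propose the revision.
    """
    rules = list(rules or [])
    for ex in examples:
        if apply_rules(rules, ex["input"]) != ex["label"]:
            keyword = max(ex["input"].lower().split(), key=len)
            rules.append((keyword, ex["label"]))
    return rules

def to_instruction(base, rules):
    """Render the induced rules as compact, human-readable instruction lines."""
    lines = [base] + [f"- If the input mentions '{k}', answer '{v}'." for k, v in rules]
    return "\n".join(lines)

examples = [
    {"input": "urgent wire transfer request", "label": "spam"},
    {"input": "meeting notes attached", "label": "ham"},
]
rules = revise_instructions(examples)
print(to_instruction("Classify the message as spam or ham.", rules))
```

The appeal suggested by the abstract is that, unlike fine-tuned weights, the adapted behavior here is a short list of inspectable instruction rules.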