[2506.11103] You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Models
Computer Science > Computation and Language
arXiv:2506.11103 (cs)
[Submitted on 6 Jun 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Models
Authors: Wenchong He, Liqian Peng, Zhe Jiang, Alex Go

Abstract: Large language models (LLMs) possess a remarkable ability to perform in-context learning (ICL), which enables them to handle multiple downstream tasks simultaneously without requiring task-specific fine-tuning. Recent studies have shown that even moderately sized LLMs, such as Mistral 7B, Gemma 7B and Llama-3 8B, can achieve ICL through few-shot in-context fine-tuning of all tasks at once. However, this approach still lags behind dedicated fine-tuning, where a separate model is trained for each individual task. In this paper, we propose a novel approach, Many-Shot In-Context Fine-tuning (ManyICL), which significantly narrows this performance gap by extending the principles of ICL to a many-shot setting. To unlock the full potential of ManyICL and address the inherent inefficiency of processing long sequences with numerous in-context examples, we propose a novel training objective. Instead of solely predicting the final answer, our approach treats every answer within the context as a supervised training target...
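The training objective described in the abstract (supervising every answer in the context rather than only the final one) can be illustrated with a minimal sketch. The segment bookkeeping and the `build_loss_mask` helper below are illustrative assumptions, not the paper's implementation; the abstract does not specify tokenization or loss details.

```python
# Hedged sketch of the ManyICL supervision idea: given a many-shot prompt
# built from (question, answer) pairs, mark EVERY answer token as a training
# target, instead of only the final answer's tokens as in standard fine-tuning.

def build_loss_mask(segments, many_shot=True):
    """segments: list of (kind, num_tokens) with kind in {"question", "answer"}.
    Returns a 0/1 mask over token positions: 1 = position contributes to the loss."""
    mask = []
    answer_idxs = [i for i, (kind, _) in enumerate(segments) if kind == "answer"]
    last_answer = answer_idxs[-1] if answer_idxs else -1
    for i, (kind, length) in enumerate(segments):
        if kind == "answer" and (many_shot or i == last_answer):
            mask.extend([1] * length)  # supervised answer span
        else:
            mask.extend([0] * length)  # context only, no loss
    return mask

# Three (question, answer) pairs; the last answer is the query's target:
segs = [("question", 5), ("answer", 2),
        ("question", 4), ("answer", 3),
        ("question", 6), ("answer", 2)]

manyicl_mask = build_loss_mask(segs, many_shot=True)    # all answers supervised
standard_mask = build_loss_mask(segs, many_shot=False)  # final answer only
```

In a training loop, such a mask would multiply the per-token cross-entropy so that every in-context answer contributes gradient signal from a single forward pass over the long sequence.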