[2509.24245] Prompt and Parameter Co-Optimization for Large Language Models
Computer Science > Computation and Language
arXiv:2509.24245 (cs)
[Submitted on 29 Sep 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Prompt and Parameter Co-Optimization for Large Language Models
Authors: Xiaohe Bo, Rui Li, Zexu Sun, Quanyu Dai, Zeyu Zhang, Zihang Tian, Xu Chen, Zhenhua Dong

Abstract: Prompt optimization and fine-tuning are two major approaches to improving the performance of Large Language Models (LLMs). They enhance LLM capabilities from complementary perspectives: the former through explicit natural language, the latter through implicit parameter updates. However, prior work has typically studied them in isolation, leaving their synergistic potential largely underexplored. To bridge this gap, we introduce MetaTuner, a novel framework that jointly integrates prompt optimization and fine-tuning for LLM training. Specifically, we introduce two neural networks to generate prompts and parameters, respectively, while allowing them to share a common bottom encoding layer to enable knowledge sharing. Guided by the final supervised signals, the framework is optimized to discover optimal combinations of prompts and parameters. Given that prompt learning involves discrete optimization while fine-tuning operates in a continuous parameter space, we desi...
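The abstract describes two generator networks sitting on a shared bottom encoding layer: one producing discrete prompt tokens, the other producing continuous parameter updates, both trained against the final supervised signal. The sketch below illustrates that two-head architecture under several assumptions; it is not the authors' code, and every name (MetaTunerSketch, prompt_head, param_head, adapter_dim, the task-embedding input) is a hypothetical placeholder chosen only to make the structure concrete.

```python
import torch
import torch.nn as nn

class MetaTunerSketch(nn.Module):
    """Hypothetical sketch of the shared-encoder / two-head layout
    described in the abstract: one head for discrete prompt tokens,
    one for continuous parameter updates."""

    def __init__(self, input_dim=768, hidden_dim=512,
                 prompt_len=16, vocab_size=32000, adapter_dim=4096):
        super().__init__()
        # Common bottom encoding layer shared by both heads (knowledge sharing)
        self.shared_encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # Prompt head: logits over the vocabulary for each prompt position
        # (the discrete selection itself would be handled downstream,
        # e.g. by sampling or a straight-through estimator)
        self.prompt_head = nn.Linear(hidden_dim, prompt_len * vocab_size)
        # Parameter head: a continuous update vector, e.g. a flattened
        # low-rank adapter delta applied to the target LLM
        self.param_head = nn.Linear(hidden_dim, adapter_dim)
        self.prompt_len = prompt_len
        self.vocab_size = vocab_size

    def forward(self, task_embedding):
        h = self.shared_encoder(task_embedding)
        prompt_logits = self.prompt_head(h).view(
            -1, self.prompt_len, self.vocab_size)
        param_delta = self.param_head(h)
        return prompt_logits, param_delta


if __name__ == "__main__":
    model = MetaTunerSketch()
    task_emb = torch.randn(2, 768)           # batch of task representations
    prompt_logits, param_delta = model(task_emb)
    prompt_tokens = prompt_logits.argmax(dim=-1)   # greedy discrete prompts
    print(prompt_tokens.shape, param_delta.shape)  # (2, 16), (2, 4096)
```

In this reading, both heads are differentiable with respect to the supervised loss through the shared encoder; how the paper actually bridges the discrete prompt space and the continuous parameter space is cut off in the abstract above, so that part is deliberately left out of the sketch.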