[2511.10465] Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Computer Science > Computation and Language
arXiv:2511.10465 (cs)
[Submitted on 13 Nov 2025 (v1), last revised 28 Mar 2026 (this version, v2)]

Title: Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Authors: Yunzhe Xu, Zhuosheng Zhang, Zhe Liu

Abstract: While prompt optimization has emerged as a critical technique for enhancing language model performance, existing approaches primarily focus on elicitation-based strategies that search for optimal prompts to activate models' capabilities. These methods exhibit fundamental limitations when addressing knowledge-intensive tasks, as they operate within static knowledge capacity rather than providing the factual knowledge, terminology precision, and reasoning patterns required in specialized domains. To address these limitations, we propose Knowledge-Provision-based Prompt Optimization (KPPO), a framework that reformulates prompt optimization as systematic knowledge integration rather than potential elicitation. KPPO introduces three key innovations: 1) a knowledge gap filling mechanism for knowledge gap identification and targeted remediation; 2) a batch-wise candidate evaluation approach that considers both performance improvement and distributional stability; 3) an adaptive knowledge pruning strategy that balances pe...
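The abstract does not specify KPPO's implementation details, but the three mechanisms it names suggest a loop of the following shape. This is a hypothetical toy sketch, not the paper's method: the scoring rule, the stability penalty (mean minus weighted dispersion), and all function names are assumptions made for illustration.

```python
import statistics

def score(prompt, example):
    # Toy stand-in for task accuracy: a prompt "solves" an example
    # iff it contains the knowledge snippet that example needs
    # (hypothetical scoring rule, not from the paper).
    return 1.0 if example["needs"] in prompt else 0.0

def batchwise_utility(prompt, batch, lam=0.5):
    # Batch-wise candidate evaluation: mean performance on the batch
    # minus a penalty on score dispersion (one plausible reading of
    # "performance improvement and distributional stability").
    scores = [score(prompt, ex) for ex in batch]
    return statistics.mean(scores) - lam * statistics.pstdev(scores)

def kppo_step(base_prompt, candidates, batch, budget=200):
    # 1) Knowledge gap filling: find failed examples and keep only
    #    candidate knowledge snippets that address them.
    failed = [ex for ex in batch if score(base_prompt, ex) == 0.0]
    useful = [c for c in candidates
              if any(ex["needs"] in c for ex in failed)]
    # 2) Batch-wise evaluation: append the candidate with best utility.
    best = max(useful,
               key=lambda c: batchwise_utility(base_prompt + "\n" + c, batch),
               default=None)
    new_prompt = base_prompt if best is None else base_prompt + "\n" + best
    # 3) Adaptive knowledge pruning: while over a length budget, drop
    #    any appended line whose removal does not reduce utility.
    lines = new_prompt.split("\n")
    for line in lines[1:]:
        trial = "\n".join(l for l in lines if l != line)
        if (len(new_prompt) > budget and
                batchwise_utility(trial, batch)
                >= batchwise_utility(new_prompt, batch)):
            new_prompt, lines = trial, trial.split("\n")
    return new_prompt

batch = [{"needs": "GDPR Art. 17"}, {"needs": "right to erasure"}]
candidates = ["GDPR Art. 17 grants the right to erasure.",
              "Unrelated trivia."]
print(kppo_step("Answer the legal question.", candidates, batch))
```

Here the first candidate covers both failed examples, so it is appended and the prompt stays under the pruning budget; the design choice of penalizing score dispersion keeps a candidate from being selected when it helps some batch examples while destabilizing others.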