[2603.03298] TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation
Computer Science > Computation and Language
arXiv:2603.03298 (cs)
[Submitted on 6 Feb 2026]

Title: TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation
Authors: Bartosz Dziuba, Kacper Kuchta, Paweł Batorski, Przemysław Spurek, Paul Swoboda

Abstract: Large Language Models (LLMs) have improved substantially in alignment, yet their behavior remains highly sensitive to prompt phrasing. This brittleness has motivated automated prompt engineering, but most existing methods (i) require a task-specific training set, (ii) rely on expensive iterative optimization to produce a single dataset-level prompt, and (iii) must be rerun from scratch for each new task. We introduce TATRA, a dataset-free prompting method that constructs instance-specific few-shot prompts by synthesizing on-the-fly examples to accompany a user-provided instruction. TATRA requires no labeled training data and avoids task-specific optimization loops, while retaining the benefits of demonstration-based prompting. Across standard text classification benchmarks, TATRA matches or improves over strong prompt-optimization baselines that depend on training data and extensive search. On mathematical reasoning benchmarks, TATRA achieves state-of-the-art performance on GSM8K and DeepMath, outperforming methods that exp...
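The abstract describes a rephrase-and-aggregate loop: paraphrase the user's instruction, synthesize demonstrations on the fly for each paraphrase, answer with the resulting instance-specific few-shot prompt, and aggregate the answers. The sketch below illustrates that loop under assumptions of ours, not the paper's actual implementation: `llm` is a hypothetical callable mapping a prompt string to a completion, the prompt templates are illustrative, and majority voting is assumed as the aggregation step.

```python
from collections import Counter


def tatra_predict(llm, instruction, query, n_rephrasings=3, n_examples=2):
    """Sketch of instance-adaptive prompting via rephrasing and aggregation.

    `llm` is a hypothetical prompt -> text callable; templates and the
    majority-vote aggregation are illustrative assumptions.
    """
    # 1. Rephrase the user instruction to obtain diverse prompt variants.
    variants = [
        llm(f"Rephrase this instruction: {instruction}")
        for _ in range(n_rephrasings)
    ]

    votes = []
    for variant in variants:
        # 2. Synthesize on-the-fly demonstrations for this variant;
        #    no labeled training data is used.
        demos = llm(f"Write {n_examples} input/output examples for: {variant}")
        # 3. Answer the query with the instance-specific few-shot prompt.
        votes.append(llm(f"{variant}\n{demos}\nInput: {query}\nOutput:"))

    # 4. Aggregate the per-variant answers (here: majority vote).
    return Counter(votes).most_common(1)[0][0]
```

Because the method is training-free, a new task only requires the user's instruction and the test input; no optimization loop is rerun per task.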