[2603.03524] Test-Time Meta-Adaptation with Self-Synthesis
Computer Science > Machine Learning

arXiv:2603.03524 (cs) [Submitted on 3 Mar 2026]

Title: Test-Time Meta-Adaptation with Self-Synthesis
Authors: Zeyneb N. Kaya, Nick Rui

Abstract: As strong general reasoners, large language models (LLMs) encounter diverse domains and tasks, where the ability to adapt and self-improve at test time is valuable. We introduce MASS, a meta-learning framework that enables LLMs to self-adapt by generating problem-specific synthetic training data and performing targeted self-updates optimized for downstream performance at inference time. We train this behavior end-to-end via bilevel optimization: an inner loop adapts on self-generated examples while an outer loop meta-learns data-attribution signals and rewards post-update task performance. The synthetic data is optimized with scalable meta-gradients, backpropagating the downstream loss through the inner updates to reward useful generations. Experiments on mathematical reasoning show that MASS learns to synthesize per-instance curricula that yield effective, data-efficient test-time adaptation.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.03524 [cs.LG] (or arXiv:2603.03524v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.03524
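The bilevel scheme described in the abstract can be illustrated with a toy numeric sketch. This is not the paper's implementation: all names and values below are hypothetical, a scalar linear model stands in for the LLM, the base parameter is held fixed, and only the per-example data-attribution weights are meta-learned. It shows the core mechanic, though: an inner gradient step adapts on weighted synthetic examples, and the outer loop updates the weights with the meta-gradient of the downstream loss, obtained by differentiating through the inner update.

```python
# Toy bilevel optimization sketch (illustrative only, not MASS itself):
# an inner gradient step adapts a scalar parameter theta on weighted
# synthetic examples; the outer loop meta-learns the per-example
# weights w ("data attribution") by backpropagating the downstream
# loss through the inner update.

ALPHA, BETA = 0.1, 0.5        # inner / outer learning rates (assumed)
XS = [1.0, 2.0, -1.0]         # hypothetical synthetic inputs
YS = [2.0, 1.0, -2.0]         # hypothetical synthetic targets
X_T, Y_T = 1.5, 3.0           # downstream (target-task) example

def inner_update(theta, w):
    """One gradient step on the weighted synthetic loss
    L_in = sum_i w_i * (theta * x_i - y_i)^2."""
    grad = sum(2 * wi * xi * (theta * xi - yi)
               for wi, xi, yi in zip(w, XS, YS))
    return theta - ALPHA * grad

def outer_loss(theta):
    """Downstream task loss, evaluated after adaptation."""
    return (theta * X_T - Y_T) ** 2

theta0 = 0.0                  # frozen base parameter (simplification)
w = [1.0] * len(XS)           # meta-learned data-attribution weights
losses = []
for _ in range(50):
    theta1 = inner_update(theta0, w)
    losses.append(outer_loss(theta1))
    # Meta-gradient via the chain rule through the inner update:
    #   dL_out/dw_i = dL_out/dtheta1 * dtheta1/dw_i
    dL_dtheta1 = 2 * (theta1 * X_T - Y_T) * X_T
    dtheta1_dw = [-ALPHA * 2 * xi * (theta0 * xi - yi)
                  for xi, yi in zip(XS, YS)]
    w = [wi - BETA * dL_dtheta1 * gi for wi, gi in zip(w, dtheta1_dw)]

print(f"outer loss: {losses[0]:.4f} -> {losses[-1]:.2e}")
```

The outer loss drops steadily as the weights learn which synthetic examples help the downstream task. The full method goes further than this sketch: the synthetic examples themselves are generated by the model and optimized end-to-end, with the meta-gradient rewarding useful generations rather than just reweighting a fixed pool.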