[2604.00533] Learning from Many and Adapting to the Unknown in Open-set Test Streams
Computer Science > Machine Learning
arXiv:2604.00533 (cs)
[Submitted on 1 Apr 2026]

Title: Learning from Many and Adapting to the Unknown in Open-set Test Streams
Authors: Xiao Zhang, Juntao Lyu, Tianyu Hu, Qianchuan Zhao, Huimin Ma

Abstract: Large Language Models (LLMs) generalize across tasks via reusable representations and flexible reasoning, yet remain brittle in real deployment under evolving tasks and continual distribution shift. A common remedy is Test-Time Adaptation (TTA); existing methods update models with hand-designed unsupervised objectives over the full parameter space, largely overlooking both the preservation of shared source knowledge and the reliability of adaptation signals. Drawing on the molecular signaling cascades of memory updating in Drosophila, we propose Synapse Consolidation (SyCo), a parameter-efficient LLM adaptation method that updates low-rank adapters through Rac1 and MAPK pathways under the guidance of a structured TTA objective driven by problem understanding, process understanding, and a source-domain guardrail. Rac1 confines plasticity to a tail-gradient subspace that is less critical for source knowledge, enabling rapid specialization while preserving source representations. MAPK uses a tiered controller to suppress noisy updates and consolidate useful adaptations under non-stationary ...
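To make the two pathways concrete, here is a minimal sketch in PyTorch of the gating ideas the abstract describes: confining test-time gradient updates of a low-rank adapter to a tail-magnitude subspace (the Rac1 role) and scaling or suppressing each update with a tiered reliability controller (the MAPK role). Everything in it (the LoRAAdapter class, the tail_fraction threshold, the entropy-based tiers) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of the two gating ideas from the abstract, in PyTorch.
# Names and thresholds are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Low-rank adapter: output = base_out + B A x, with only A and B trainable."""
    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x: torch.Tensor, base_out: torch.Tensor) -> torch.Tensor:
        return base_out + x @ self.A.T @ self.B.T

def tail_gradient_mask(grad: torch.Tensor, tail_fraction: float = 0.3) -> torch.Tensor:
    """Rac1-style gating: keep only the smallest-magnitude gradient entries,
    i.e. a tail subspace assumed less critical for source knowledge."""
    flat = grad.abs().flatten()
    k = max(1, int(tail_fraction * flat.numel()))
    threshold = flat.kthvalue(k).values
    return (grad.abs() <= threshold).float()

def tiered_update_scale(logits: torch.Tensor, low: float = 0.5, high: float = 2.0) -> float:
    """MAPK-style tiered controller: damp or suppress the update by the
    prediction entropy of the current batch, a common TTA reliability proxy
    used here purely as an illustrative stand-in."""
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean().item()
    if entropy > high:   # very uncertain: suppress the noisy update entirely
        return 0.0
    if entropy > low:    # moderately uncertain: damp the update
        return 0.5
    return 1.0           # confident: consolidate the adaptation fully

# Usage: one gated test-time step on an unlabeled batch.
adapter = LoRAAdapter(d_in=16, d_out=16)
opt = torch.optim.SGD(adapter.parameters(), lr=1e-2)

x = torch.randn(4, 16)
base_out = torch.zeros(4, 16)  # stand-in for the frozen backbone's output
logits = adapter(x, base_out)
# Entropy minimization as a generic hand-designed unsupervised TTA objective.
loss = -(logits.softmax(-1) * logits.log_softmax(-1)).sum(-1).mean()

opt.zero_grad()
loss.backward()
scale = tiered_update_scale(logits.detach())
for p in adapter.parameters():
    if p.grad is not None:
        p.grad.mul_(tail_gradient_mask(p.grad) * scale)  # confine + gate the step
opt.step()
```

The design point the sketch isolates: the tail mask decides *where* the adapter may move (directions with small gradient magnitude, presumed less load-bearing for source knowledge), while the tiered scale decides *how much* any given batch is trusted, so unreliable streams produce small or zero updates instead of drift.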