[2604.03925] AdaptFuse: Training-Free Sequential Preference Learning via Externalized Bayesian Inference
Computer Science > Computation and Language
arXiv:2604.03925 (cs) [Submitted on 5 Apr 2026]

Title: AdaptFuse: Training-Free Sequential Preference Learning via Externalized Bayesian Inference
Authors: Fangzhou Lin, Peiran Li, Shuo Xing, Siyuan Yang, Qianwen Ge, Kazunori Yamada, Ziming Zhang, Haichong Zhang, Zhengzhong Tu

Abstract: Large language models struggle to accumulate evidence across multiple rounds of user interaction, failing to update their beliefs in a manner consistent with Bayesian inference. Existing solutions require fine-tuning on sensitive user interaction data, limiting their applicability in privacy-conscious settings. We propose AdaptFuse, a training-free framework that externalizes probabilistic computation entirely from the LLM: a symbolic module maintains a Bayesian posterior over a discrete hypothesis set, while a frozen LLM contributes semantic reasoning via multi-sample Dirichlet aggregation. The two signals are combined through entropy-adaptive fusion, which automatically weights each source by its predictive confidence, shifting reliance from the LLM to the symbolic posterior as evidence accumulates. We evaluate across three domains (flight recommendation, hotel recommendation, and web shopping) on Gemma 2 9B, Llama 3 8B, and Qwen 2.5 7B. AdaptFuse consistently ou...
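The abstract describes three components: a symbolic Bayesian update over a discrete hypothesis set, Dirichlet aggregation of multiple LLM samples into a semantic distribution, and entropy-adaptive fusion of the two. The sketch below illustrates one plausible reading of that pipeline; the exact fusion rule, Dirichlet prior (`alpha0`), and confidence measure are assumptions, not the paper's implementation.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One round of symbolic Bayesian updating over a discrete hypothesis set."""
    post = prior * likelihood
    return post / post.sum()

def dirichlet_aggregate(sample_counts, alpha0=1.0):
    """Aggregate K one-hot LLM samples (counts per hypothesis) into the mean of
    a Dirichlet posterior with symmetric prior alpha0 (assumed value)."""
    alpha = alpha0 + np.asarray(sample_counts, dtype=float)
    return alpha / alpha.sum()

def entropy(p, eps=1e-12):
    """Shannon entropy in nats."""
    return -np.sum(p * np.log(p + eps))

def entropy_adaptive_fusion(p_sym, p_llm):
    """Weight each source by its predictive confidence: lower entropy gets a
    larger weight. The specific inverse-entropy rule here is an assumption."""
    h_max = np.log(len(p_sym))          # max possible entropy (uniform)
    c_sym = h_max - entropy(p_sym)      # confidence of symbolic posterior
    c_llm = h_max - entropy(p_llm)      # confidence of LLM semantic signal
    w = c_sym / (c_sym + c_llm + 1e-12)
    fused = w * p_sym + (1.0 - w) * p_llm
    return fused / fused.sum()
```

As evidence accumulates over rounds, repeated `bayes_update` calls sharpen the symbolic posterior, its entropy falls, and the fusion weight `w` shifts reliance from the LLM signal toward the symbolic one, matching the behavior the abstract describes.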