[2602.17744] Bayesian Optimality of In-Context Learning with Selective State Spaces
Summary
This paper introduces Bayesian optimal sequential prediction as a framework for understanding in-context learning (ICL), showing that on tasks with temporally correlated noise, Bayes-optimal predictors strictly outperform gradient-descent-based (ERM) estimators.
Why It Matters
The research reframes in-context learning as optimal inference rather than implicit optimization. This shift has implications for architecture design: on sequence tasks with structured (temporally correlated) noise, statistical efficiency, not optimization, determines asymptotic risk, favoring architectures that can implement posterior inference.
Key Takeaways
- Bayesian optimality provides a new perspective on in-context learning.
- Selective state space models outperform gradient-descent-style (ERM) methods on tasks with temporally correlated noise, achieving strictly lower asymptotic risk.
- The research highlights the importance of statistical efficiency in model design.
arXiv:2602.17744 (cs) — Computer Science > Machine Learning
Submitted on 19 Feb 2026
Title: Bayesian Optimality of In-Context Learning with Selective State Spaces
Authors: Di Zhang, Jiaqi Xing

Abstract: We propose Bayesian optimal sequential prediction as a new principle for understanding in-context learning (ICL). Unlike interpretations framing Transformers as performing implicit gradient descent, we formalize ICL as meta-learning over latent sequence tasks. For tasks governed by Linear Gaussian State Space Models (LG-SSMs), we prove a meta-trained selective SSM asymptotically implements the Bayes-optimal predictor, converging to the posterior predictive mean. We further establish a statistical separation from gradient descent, constructing tasks with temporally correlated noise where the optimal Bayesian predictor strictly outperforms any empirical risk minimization (ERM) estimator. Since Transformers can be seen as performing implicit ERM, this demonstrates selective SSMs achieve lower asymptotic risk due to superior statistical efficiency. Experiments on synthetic LG-SSM tasks and a character-level Markov benchmark confirm selective SSMs converge faster to Bayes-optimal risk, show superior sample efficiency with longer contexts in structured-noise settings, and track latent states more robustly than linear Transformers…
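To make the "posterior predictive mean" target concrete, here is a minimal sketch of the Bayes-optimal predictor for a scalar LG-SSM, computed exactly with a Kalman filter. The model `x_{t+1} = a*x_t + w_t`, `y_t = x_t + v_t` and all parameter values below are illustrative assumptions, not taken from the paper; the point is only that the quantity the paper's meta-trained selective SSM is proven to converge to is this closed-form recursive estimate.

```python
import numpy as np

def kalman_predictive_means(ys, a=0.9, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """One-step-ahead predictive means E[y_t | y_{<t}] for a scalar LG-SSM.

    Latent dynamics: x_{t+1} = a*x_t + w_t, w_t ~ N(0, q)
    Observations:    y_t     = x_t + v_t,   v_t ~ N(0, r)
    """
    m, p = m0, p0  # current posterior mean / variance of the latent state
    preds = []
    for y in ys:
        # Propagate the posterior one step forward (prediction step).
        m_pred = a * m
        p_pred = a * a * p + q
        # Since y_t = x_t + v_t, the predictive mean of y_t equals m_pred.
        preds.append(m_pred)
        # Condition on the observed y_t (update step, standard Kalman gain).
        k = p_pred / (p_pred + r)
        m = m_pred + k * (y - m_pred)
        p = (1.0 - k) * p_pred
    return np.array(preds)

# Simulate one illustrative LG-SSM trajectory and score the Bayes predictor.
rng = np.random.default_rng(0)
a, q, r = 0.9, 0.1, 0.5
x, ys = 0.0, []
for _ in range(200):
    x = a * x + rng.normal(scale=np.sqrt(q))
    ys.append(x + rng.normal(scale=np.sqrt(r)))
ys = np.array(ys)
preds = kalman_predictive_means(ys, a=a, q=q, r=r)
mse = float(np.mean((ys - preds) ** 2))
```

Because the filter carries the full Gaussian posterior forward in O(1) state per step, it is the natural yardstick for the paper's claim: an architecture that matches this recursion matches the Bayes-optimal risk, whereas an ERM fit that ignores the noise correlation structure cannot.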