[2510.23006] Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures
Computer Science > Computation and Language

arXiv:2510.23006 (cs)

[Submitted on 27 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures

Authors: Shenran Wang, Timothy Tin-Long Tse, Jian Zhu

Abstract: We perform in-depth evaluations of in-context learning (ICL) on state-of-the-art transformer, state-space, and hybrid large language models over two categories of knowledge-based ICL tasks. Using a combination of behavioral probing and intervention-based methods, we find that, while LLMs of different architectures can behave similarly in task performance, their internal mechanisms can remain different. We discover that function vectors (FVs) responsible for ICL are primarily located in the self-attention and Mamba layers, and speculate that Mamba2 uses a mechanism different from FVs to perform ICL. FVs are more important for ICL involving parametric knowledge retrieval than for contextual knowledge understanding. Our work contributes to a more nuanced understanding of ICL across architectures and task types. Methodologically, our approach also highlights the importance of combining both behavioral and mechanistic analyses to investigate LLM capabilities.