[2510.23006] Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures


arXiv - AI · 3 min read

About this article


Computer Science > Computation and Language
arXiv:2510.23006 (cs) [Submitted on 27 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures
Authors: Shenran Wang, Timothy Tin-Long Tse, Jian Zhu

Abstract: We perform in-depth evaluations of in-context learning (ICL) on state-of-the-art transformer, state-space, and hybrid large language models over two categories of knowledge-based ICL tasks. Using a combination of behavioral probing and intervention-based methods, we find that, while LLMs of different architectures can behave similarly in task performance, their internals can remain different. We find that the function vectors (FVs) responsible for ICL are primarily located in the self-attention and Mamba layers, and we speculate that Mamba2 uses a mechanism different from FVs to perform ICL. FVs are more important for ICL involving parametric knowledge retrieval than for contextual knowledge understanding. Our work contributes to a more nuanced understanding of ICL across architectures and task types. Methodologically, our approach also highlights the importance of combining behavioral and mechanistic analyses when investigating LLM capabilities.
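To make the intervention-based methodology concrete, here is a minimal sketch of a function-vector style extraction-and-injection experiment in the general spirit of the FV literature. Everything specific below (the `gpt2` model, layer index 6, the antonym prompts) is an illustrative assumption, not the paper's actual models, layers, or tasks.

```python
# Illustrative function-vector (FV) sketch: extract a "task" vector from
# few-shot prompts and inject it into a zero-shot run. All specifics here
# (gpt2, layer 6, antonym task) are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper studies transformer, SSM, and hybrid LLMs
LAYER = 6            # placeholder layer at which the FV is read out and re-injected

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

# Few-shot prompts demonstrating the task (antonyms), each ending mid-pattern.
icl_prompts = [
    "hot -> cold\nbig -> small\nfast ->",
    "up -> down\nwet -> dry\nheavy ->",
]

# 1) Extraction: average the residual-stream activation at the final token
#    of each few-shot prompt, taken at the output of block LAYER.
with torch.no_grad():
    acts = []
    for p in icl_prompts:
        out = model(**tok(p, return_tensors="pt"), output_hidden_states=True)
        # hidden_states[0] is the embedding output, so index LAYER + 1
        # is the output of transformer block LAYER.
        acts.append(out.hidden_states[LAYER + 1][0, -1])
    fv = torch.stack(acts).mean(dim=0)

# 2) Injection: add the FV to the last-token residual stream of a zero-shot
#    prompt via a forward hook on the same block.
def inject_fv(module, inputs, output):
    hidden = output[0]  # GPT-2 blocks return a tuple; [0] is the hidden states
    hidden[:, -1, :] = hidden[:, -1, :] + fv
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(inject_fv)
with torch.no_grad():
    out = model(**tok("dark ->", return_tensors="pt"))
handle.remove()

# If the FV carries the ICL task, the zero-shot prediction should shift
# toward the antonym ("light") relative to a run without the hook.
print(tok.decode(out.logits[0, -1].argmax()))
```

A real experiment of this flavor would sweep the injection over every layer and compare architectures; the paper's finding that FVs concentrate in self-attention and Mamba layers comes from interventions of this general kind, applied across transformer, state-space, and hybrid models.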

Originally published on March 02, 2026. Curated by AI News.

Related Articles

LLMs

This Is Not Hacking. This Is Structured Intelligence.

Watch me demonstrate everything I've been talking about—live, in real time. The Setup: Maestro University AI enrollment system Standard c...

Reddit - Artificial Intelligence · 1 min
LLMs

[D] How come Muon is only being used for Transformers?

Muon has quickly been adopted in LLM training, yet we don't see it being talked about in other contexts. Searches for Muon on ConvNets tu...

Reddit - Machine Learning · 1 min
LLMs

[P] I trained a language model from scratch for a low resource language and got it running fully on-device on Android (no GPU, demo)

Hi Everybody! I just wanted to share an update on a project I’ve been working on called BULaMU, a family of language models trained (20M,...

Reddit - Machine Learning · 1 min
LLMs

Paper Finds That Leading AI Chatbots Like ChatGPT and Claude Remain Incredibly Sycophantic, Resulting in Twisted Effects on Users

A study found that sycophancy is pervasive among chatbots, and that bots are more likely than human peers to affirm a person's bad behavior.

AI Tools & Products · 6 min

