[2603.02874] Retrievit: In-context Retrieval Capabilities of Transformers, State Space Models, and Hybrid Architectures
Computer Science > Artificial Intelligence
arXiv:2603.02874 (cs)
[Submitted on 3 Mar 2026]

Title: Retrievit: In-context Retrieval Capabilities of Transformers, State Space Models, and Hybrid Architectures
Authors: Georgios Pantazopoulos, Malvina Nikandrou, Ioannis Konstas, Alessandro Suglia

Abstract: Transformers excel at in-context retrieval but suffer from quadratic complexity in sequence length, while State Space Models (SSMs) offer efficient linear-time processing but have limited retrieval capabilities. We investigate whether hybrid architectures combining Transformers and SSMs can achieve the best of both worlds on two synthetic in-context retrieval tasks. The first task, n-gram retrieval, requires the model to identify and reproduce the n-gram that immediately follows the query within the input sequence. The second task, position retrieval, presents the model with a single query token and requires it to perform a two-hop associative lookup: first locating the corresponding element in the sequence, and then outputting its positional index. Under controlled experimental conditions, we assess data efficiency, length generalization, robustness to out-of-domain training examples, and learned representations across Transformers, SSMs, and hybrid architectures. We find that h...
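The two synthetic tasks described in the abstract can be sketched as simple data generators. The function names, parameters, and sampling scheme below are assumptions for illustration, not the paper's actual setup; in particular, tokens are sampled without replacement here so that each query has a unique, unambiguous answer.

```python
import random


def make_ngram_retrieval_example(vocab_size=64, seq_len=32, n=3, seed=0):
    """n-gram retrieval (sketch): given the sequence and a query token,
    the target is the n-gram immediately following the query's occurrence."""
    rng = random.Random(seed)
    # Unique tokens so the query occurs exactly once (an assumed simplification).
    seq = rng.sample(range(vocab_size), seq_len)
    q_pos = rng.randrange(seq_len - n)  # leave room for a following n-gram
    query = seq[q_pos]
    target = seq[q_pos + 1 : q_pos + 1 + n]
    return seq, query, target


def make_position_retrieval_example(vocab_size=64, seq_len=32, seed=0):
    """Position retrieval (sketch): a two-hop lookup — locate the query token
    in the sequence, then output its positional index."""
    rng = random.Random(seed)
    seq = rng.sample(range(vocab_size), seq_len)
    q_pos = rng.randrange(seq_len)
    query = seq[q_pos]
    target = q_pos  # the answer is the index, not a token
    return seq, query, target
```

A model is then trained to map (sequence, query) pairs to the corresponding target; success on the first task measures copying of local context, while the second additionally probes positional reasoning.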