[2510.26784] LLMs Process Lists With General Filter Heads
Summary
This paper investigates how large language models (LLMs) perform list-processing tasks and finds that they encode general filtering operations, akin to the filter function of functional programming, in a small set of attention heads.
Why It Matters
Understanding how LLMs manage list-processing tasks is crucial for improving their interpretability and efficiency. This research highlights the potential for LLMs to generalize computational strategies, which can enhance their application across diverse tasks and formats.
Key Takeaways
- LLMs can encode a compact representation of filtering operations.
- A small number of attention heads, termed filter heads, are responsible for this encoding.
- The filtering predicate representation is general and portable: it can be extracted and reapplied across different collections, formats, languages, and tasks.
- LLMs can also use an alternative strategy: eagerly evaluating the predicate and storing the result as a flag directly in item representations.
- The findings suggest LLMs can implement abstract computational operations analogous to those of traditional programming.
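The functional-programming analogy in the takeaways above can be made concrete. This is an illustrative sketch only, not the paper's mechanism: the claim is that filter heads encode a predicate representation that behaves like a first-class predicate passed to a generic filter, reusable unchanged across differently presented collections. The predicate and collections below are hypothetical examples.

```python
# A predicate as a first-class value, analogous to the predicate
# representation the paper finds in filter-head query states.
def is_fruit(item: str) -> bool:
    return item in {"apple", "mango", "pear"}

# The same predicate applies unchanged to collections in different formats,
# mirroring the portability the paper reports for filter heads.
groceries = ["apple", "soap", "mango", "batteries"]
csv_row = "pear,stapler,apple".split(",")

print(list(filter(is_fruit, groceries)))  # ['apple', 'mango']
print(list(filter(is_fruit, csv_row)))    # ['pear', 'apple']
```

The point of the analogy is that the predicate is decoupled from any particular list, which is what makes extracting and reapplying it across tasks meaningful.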
Computer Science > Artificial Intelligence
arXiv:2510.26784 (cs)
[Submitted on 30 Oct 2025 (v1), last revised 23 Feb 2026 (this version, v2)]
Title: LLMs Process Lists With General Filter Heads
Authors: Arnab Sen Sharma, Giordano Rogers, Natalie Shapira, David Bau
Abstract: We investigate the mechanisms underlying a range of list-processing tasks in LLMs, and we find that LLMs have learned to encode a compact, causal representation of a general filtering operation that mirrors the generic "filter" function of functional programming. Using causal mediation analysis on a diverse set of list-processing tasks, we find that a small number of attention heads, which we dub filter heads, encode a compact representation of the filtering predicate in their query states at certain tokens. We demonstrate that this predicate representation is general and portable: it can be extracted and reapplied to execute the same filtering operation on different collections, presented in different formats, languages, or even in different tasks. However, we also identify situations where transformer LMs can exploit a different strategy for filtering: eagerly evaluating if an item satisfies the predicate and storing this intermediate result as a flag directly in the item representations. Our results reveal that transformer LMs can develop human-interpretable implementat...
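The two filtering strategies the abstract contrasts can be sketched as ordinary code. This is an illustrative analogy under assumed toy types, not the paper's implementation: the "general" strategy keeps the predicate abstract and applies it at filter time, while the "eager" strategy precomputes a satisfies-predicate flag and stores it on each item, analogous to the flag the paper finds stored in item representations. All names here (`Tagged`, `filter_lazy`, `filter_by_flag`) are hypothetical.

```python
from dataclasses import dataclass

def is_even(x: int) -> bool:
    return x % 2 == 0

# General strategy: keep the predicate abstract, apply it at filter time.
def filter_lazy(items, predicate):
    return [x for x in items if predicate(x)]

# Eager strategy: evaluate the predicate up front and store the result
# as a flag on each item, then filter by the stored flag.
@dataclass
class Tagged:
    value: int
    flag: bool

def tag(items, predicate):
    return [Tagged(x, predicate(x)) for x in items]

def filter_by_flag(tagged):
    return [t.value for t in tagged if t.flag]

items = [1, 2, 3, 4, 5, 6]
assert filter_lazy(items, is_even) == filter_by_flag(tag(items, is_even)) == [2, 4, 6]
```

Both strategies produce the same output; they differ in where the predicate evaluation happens, which is exactly the distinction the paper draws between filter heads and flag-storing.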