[2602.22424] Causality $\neq$ Invariance: Function and Concept Vectors in LLMs
Summary
This paper investigates the representation of concepts in large language models (LLMs), revealing that Function Vectors (FVs) are not fully invariant across input formats, while Concept Vectors (CVs) provide more stable representations.
Why It Matters
Understanding how LLMs represent concepts is crucial for improving their performance and generalization across various tasks. This research highlights the distinction between FVs and CVs, offering insights into the mechanisms that drive in-context learning and concept representation.
Key Takeaways
- FVs are not invariant across different input formats, affecting task performance.
- CVs provide more stable representations of concepts compared to FVs.
- LLMs can contain abstract concept representations that differ from those driving in-context learning.
- FVs excel when the extraction and application formats match, while CVs generalize better across formats and languages.
- The study suggests different underlying mechanisms for FVs and CVs in LLMs.
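The "nearly orthogonal" finding can be illustrated with a toy cosine-similarity check. The vectors below are random stand-ins for FVs extracted from two formats (hypothetical data, not the paper's measurements); in high dimensions, unrelated directions have cosine similarity close to zero, which is the pattern the paper reports for mismatched formats:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
d = 4096  # hypothetical hidden-state dimension

# Stand-in FVs for the same concept, extracted from two input formats.
fv_open_ended = rng.standard_normal(d)
fv_multiple_choice = rng.standard_normal(d)

# Independent high-dimensional vectors are nearly orthogonal, mirroring
# the paper's finding for FVs from mismatched formats.
sim = cosine_similarity(fv_open_ended, fv_multiple_choice)
print(round(sim, 3))
```

This is only a sanity check on the geometry, not a reproduction of the paper's extraction procedure.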
Computer Science > Computation and Language
arXiv:2602.22424 (cs)
[Submitted on 25 Feb 2026]
Title: Causality $\neq$ Invariance: Function and Concept Vectors in LLMs
Authors: Gustaw Opiełka, Hannes Rosenbusch, Claire E. Stevenson
Abstract: Do large language models (LLMs) represent concepts abstractly, i.e., independent of input format? We revisit Function Vectors (FVs), compact representations of in-context learning (ICL) tasks that causally drive task performance. Across multiple LLMs, we show that FVs are not fully invariant: FVs are nearly orthogonal when extracted from different input formats (e.g., open-ended vs. multiple-choice), even if both target the same concept. We identify Concept Vectors (CVs), which carry more stable concept representations. Like FVs, CVs are composed of attention head outputs; however, unlike FVs, the constituent heads are selected using Representational Similarity Analysis (RSA) based on whether they encode concepts consistently across input formats. While these heads emerge in similar layers to FV-related heads, the two sets are largely distinct, suggesting different underlying mechanisms. Steering experiments reveal that FVs excel in-distribution, when extraction and application formats match (e.g., both open-ended in English), while CVs generalize better out-of-distribution a...
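The RSA-based head selection described in the abstract can be sketched as follows. All sizes (`n_items`, `d_head`, `n_heads`, `top_k`) and the random activations are illustrative placeholders, not the paper's settings: for each head, we build a representational dissimilarity matrix (RDM) over the same set of concepts in each format, then rank heads by the Spearman correlation of their RDMs across formats:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix: pairwise
    correlation distance between per-item activations."""
    return pdist(activations, metric="correlation")

def cross_format_consistency(acts_a: np.ndarray, acts_b: np.ndarray) -> float:
    """Spearman correlation of one head's RDMs across two input formats."""
    return spearmanr(rdm(acts_a), rdm(acts_b)).correlation

rng = np.random.default_rng(1)
n_items, d_head, n_heads = 20, 64, 8  # illustrative sizes
top_k = 2

# Stand-in per-head activations for the same concepts seen in an
# open-ended and a multiple-choice format (shared signal plus noise).
scores = []
for _ in range(n_heads):
    shared = rng.standard_normal((n_items, d_head))
    acts_open = shared + 0.1 * rng.standard_normal((n_items, d_head))
    acts_mc = shared + 0.1 * rng.standard_normal((n_items, d_head))
    scores.append(cross_format_consistency(acts_open, acts_mc))

# Heads with the most format-consistent geometry; in the paper's setup,
# the outputs of such heads are combined to form a Concept Vector.
consistent_heads = np.argsort(scores)[::-1][:top_k]
print(consistent_heads.tolist())
```

The sketch only covers the selection criterion; how the chosen heads' outputs are aggregated into a CV, and at which layers, follows the FV construction and is not reproduced here.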