[2602.22424] Causality $\neq$ Invariance: Function and Concept Vectors in LLMs


Summary

This paper investigates the representation of concepts in large language models (LLMs), revealing that Function Vectors (FVs) are not fully invariant across input formats, while Concept Vectors (CVs) provide more stable representations.

Why It Matters

Understanding how LLMs represent concepts is crucial for improving their performance and generalization across various tasks. This research highlights the distinction between FVs and CVs, offering insights into the mechanisms that drive in-context learning and concept representation.

Key Takeaways

  • FVs are not invariant across different input formats, affecting task performance.
  • CVs provide more stable representations of concepts compared to FVs.
  • LLMs can contain abstract concept representations that differ from those driving in-context learning.
  • FVs excel when extraction and application formats match, while CVs generalize better across formats and languages.
  • The study suggests different underlying mechanisms for FVs and CVs in LLMs.
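The near-orthogonality finding in the first takeaway can be illustrated with a quick cosine-similarity check. The vectors below are random stand-ins, not FVs extracted from a real model, and the 4096-dimensional size is an illustrative assumption; the point is only that in high dimensions, unrelated directions have cosine similarity near zero, which is how the paper quantifies "not invariant."

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical stand-ins for FVs extracted for the same concept under two
# input formats (e.g., open-ended vs. multiple-choice). Real FVs would come
# from summed attention-head outputs; these are random for illustration.
rng = np.random.default_rng(0)
fv_open_ended = rng.normal(size=4096)       # residual-stream width is an assumption
fv_multiple_choice = rng.normal(size=4096)

# Independent high-dimensional vectors are nearly orthogonal, so a cosine
# near 0 between format-specific FVs signals format-dependent encodings.
sim = cosine(fv_open_ended, fv_multiple_choice)
print(f"cosine similarity: {sim:.3f}")  # close to 0 for unrelated directions
```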

Computer Science > Computation and Language
arXiv:2602.22424 (cs) [Submitted on 25 Feb 2026]
Title: Causality $\neq$ Invariance: Function and Concept Vectors in LLMs
Authors: Gustaw Opiełka, Hannes Rosenbusch, Claire E. Stevenson

Abstract: Do large language models (LLMs) represent concepts abstractly, i.e., independent of input format? We revisit Function Vectors (FVs), compact representations of in-context learning (ICL) tasks that causally drive task performance. Across multiple LLMs, we show that FVs are not fully invariant: FVs are nearly orthogonal when extracted from different input formats (e.g., open-ended vs. multiple-choice), even if both target the same concept. We identify Concept Vectors (CVs), which carry more stable concept representations. Like FVs, CVs are composed of attention head outputs; however, unlike FVs, the constituent heads are selected using Representational Similarity Analysis (RSA) based on whether they encode concepts consistently across input formats. While these heads emerge in similar layers to FV-related heads, the two sets are largely distinct, suggesting different underlying mechanisms. Steering experiments reveal that FVs excel in-distribution, when extraction and application formats match (e.g., both open-ended in English), while CVs generalize better out-of-distribution a...
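The RSA-based head selection described in the abstract can be sketched as follows. Everything here is a hypothetical illustration under stated assumptions, not the paper's actual procedure: per-head activations are synthetic, the tensor shapes, the 0.5 score threshold, and the use of a cosine-based representational dissimilarity matrix (RDM) with Pearson correlation are all choices made for the example.

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - pairwise cosine similarity
    between activation rows (one row per concept/prompt)."""
    normed = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T

def rsa_score(acts_a, acts_b):
    """Correlate the upper triangles of two RDMs (Pearson). A high score means
    the head arranges concepts the same way in both input formats."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return float(np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1])

# Synthetic per-head activations: (n_heads, n_concepts, head_dim), one tensor
# per input format. Heads 0-3 share concept structure across formats; heads
# 4-7 encode format-specific noise in the second format.
rng = np.random.default_rng(1)
n_heads, n_concepts, head_dim = 8, 10, 64
shared = rng.normal(size=(n_concepts, head_dim))
fmt_a = rng.normal(size=(n_heads, n_concepts, head_dim)) * 0.2 + shared
fmt_b = rng.normal(size=(n_heads, n_concepts, head_dim)) * 0.2 + shared
fmt_b[4:] = rng.normal(size=(4, n_concepts, head_dim))

scores = [rsa_score(fmt_a[h], fmt_b[h]) for h in range(n_heads)]
consistent_heads = [h for h, s in enumerate(scores) if s > 0.5]
print(consistent_heads)  # heads whose concept geometry survives the format change
```

In this toy setup, a concept vector would then be assembled from the outputs of `consistent_heads` only, in the same way an FV is assembled from causally important heads.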
