[2602.17881] Understanding Unreliability of Steering Vectors in Language Models: Geometric Predictors and the Limits of Linear Approximations

arXiv - Machine Learning

Summary

This paper investigates why steering vectors in language models are unreliable, identifying geometric properties of the training data that predict steering effectiveness and showing where linear approximations of behavior break down.

Why It Matters

Understanding the limitations of steering vectors is crucial for improving the reliability of language models. This research highlights how training data and behavior representation affect steering efficacy, which can lead to more robust AI systems.

Key Takeaways

  • Steering vectors can control language model behavior but are often unreliable.
  • Higher cosine similarity in training data correlates with more reliable steering.
  • Better separation of positive and negative activations enhances steerability.
  • Directionally distinct steering vectors can yield similar performance across datasets.
  • Non-linear behavior representations must be considered for improved steering methods.
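The first takeaway — that cosine similarity among training activation differences predicts reliability — can be illustrated with a minimal sketch. This is not the paper's code; it assumes synthetic activations and uses the common difference-in-means construction for the steering vector, with all names and data illustrative:

```python
# Hypothetical sketch: difference-in-means steering vector and a
# cosine-similarity reliability predictor. Activations are synthetic;
# in practice they would come from a model's residual stream at one layer.
import numpy as np

rng = np.random.default_rng(0)
d = 64   # hidden dimension (illustrative)
n = 32   # number of contrastive prompt pairs

# Activations for positive/negative prompt variants; the positive set is
# shifted to mimic a shared behavior direction.
pos = rng.normal(size=(n, d)) + 1.0
neg = rng.normal(size=(n, d))

diffs = pos - neg                  # per-sample activation differences
steer = diffs.mean(axis=0)         # difference-in-means steering vector

# Predictor: mean cosine similarity of each per-sample difference to the
# mean steering direction. Higher values should indicate more reliable
# steering, per the paper's first finding.
unit = steer / np.linalg.norm(steer)
cos = (diffs @ unit) / np.linalg.norm(diffs, axis=1)
reliability_score = cos.mean()
print(f"mean cosine similarity: {reliability_score:.3f}")
```

When the per-sample differences point in scattered directions, this score drops toward zero, which under the paper's finding would flag the behavior as hard to steer reliably.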

Computer Science > Computation and Language
arXiv:2602.17881 (cs) [Submitted on 19 Feb 2026]

Title: Understanding Unreliability of Steering Vectors in Language Models: Geometric Predictors and the Limits of Linear Approximations
Authors: Joschka Braun

Abstract: Steering vectors are a lightweight method for controlling language model behavior by adding a learned bias to the activations at inference time. Although effective on average, steering effect sizes vary across samples and are unreliable for many target behaviors. In my thesis, I investigate why steering reliability differs across behaviors and how it is impacted by steering vector training data. First, I find that higher cosine similarity between training activation differences predicts more reliable steering. Second, I observe that behavior datasets where positive and negative activations are better separated along the steering direction are more reliably steerable. Finally, steering vectors trained on different prompt variations are directionally distinct, yet perform similarly well and exhibit correlated efficacy across datasets. My findings suggest that steering vectors are unreliable when the latent target behavior representation is not effectively approximated by the linear steering direction. Taken together, these insights offer...
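The abstract's second predictor — how well positive and negative activations separate when projected onto the steering direction — can also be sketched. Again this is an illustrative reconstruction on synthetic data, not the thesis's code; the separation score here is a simple signal-to-noise ratio along the direction, one plausible way to operationalize "better separated":

```python
# Hypothetical sketch of the separation predictor: project positive and
# negative activations onto the steering direction and measure the gap
# between class means relative to their spread. Data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
d, n = 64, 32
pos = rng.normal(size=(n, d)) + 1.0   # shifted along a shared direction
neg = rng.normal(size=(n, d))

steer = (pos - neg).mean(axis=0)
direction = steer / np.linalg.norm(steer)

# Scalar projections of both activation sets onto the steering direction.
p_pos = pos @ direction
p_neg = neg @ direction

# Separation score: mean gap over pooled standard deviation along the
# direction. Larger values mean the two classes are more cleanly split.
separation = (p_pos.mean() - p_neg.mean()) / np.sqrt(
    0.5 * (p_pos.var() + p_neg.var())
)
print(f"separation along steering direction: {separation:.2f}")
```

Per the abstract, datasets where this kind of separation is high are more reliably steerable, while low separation signals that a single linear direction is a poor approximation of the latent behavior representation.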
