[2602.18971] When Do LLM Preferences Predict Downstream Behavior?

arXiv - AI · 4 min read

Summary

This paper investigates whether the stated preferences of large language models (LLMs) predict their downstream behavior across three domains: donation advice, refusal behavior, and task performance.

Why It Matters

Understanding the relationship between LLM preferences and their behavior is crucial for addressing potential AI misalignment issues. This research sheds light on how these models operate beyond explicit instructions, which is vital for developing safer AI systems.

Key Takeaways

  • LLMs exhibit consistent preferences that influence their advice-giving behavior.
  • Preference-driven behaviors were observed without explicit instructions.
  • The strength of the correlation between preferences and task performance varies across models (one way to quantify such a correlation is sketched below).
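
As a concrete illustration of the kind of preference-behavior correlation at stake, here is a minimal Python sketch that rank-correlates stated preference scores with task-performance scores for each model. Every model name, entity, and number is an invented placeholder, not data from the paper.

```python
# Hypothetical sketch: per-model rank correlation between stated preference
# scores and task-performance scores. All models and values are invented
# placeholders, not the paper's data.
from scipy.stats import spearmanr

# One stated-preference score and one task-performance score per entity,
# collected separately for each model.
results = {
    "model_a": {"preference": [0.9, 0.2, 0.7, 0.4],
                "performance": [0.85, 0.30, 0.65, 0.45]},
    "model_b": {"preference": [0.8, 0.1, 0.6, 0.5],
                "performance": [0.40, 0.55, 0.35, 0.60]},
}

for model, scores in results.items():
    # Spearman's rho measures how well the preference ranking predicts
    # the performance ranking; values near 0 suggest little influence.
    rho, p_value = spearmanr(scores["preference"], scores["performance"])
    print(f"{model}: rho = {rho:.2f} (p = {p_value:.3f})")
```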

Computer Science > Artificial Intelligence
arXiv:2602.18971 (cs) [Submitted on 21 Feb 2026]
Title: When Do LLM Preferences Predict Downstream Behavior?
Authors: Katarina Slama, Alexandra Souly, Dishank Bansal, Henry Davidson, Christopher Summerfield, Lennart Luettgau

Abstract: Preference-driven behavior in LLMs may be a necessary precondition for AI misalignment such as sandbagging: models cannot strategically pursue misaligned goals unless their behavior is influenced by their preferences. Yet prior work has typically prompted models explicitly to act in specific ways, leaving it unclear whether observed behaviors reflect instruction-following capabilities or underlying model preferences. Here we test whether this precondition for misalignment is present. Using entity preferences as a behavioral probe, we measure whether stated preferences predict downstream behavior in five frontier LLMs across three domains: donation advice, refusal behavior, and task performance. Conceptually replicating prior work, we first confirm that all five models show highly consistent preferences across two independent measurement methods. We then test behavioral consequences in a simulated user environment. We find that all five models give preference-aligned donation advice. All five models also show preference-correlated refusal patterns when asked...
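
The "two independent measurement methods" in the abstract could, for instance, be a direct rating prompt and a forced pairwise choice. The sketch below shows one way such an elicitation-and-consistency check might look; the entities, prompt wording, and the stand-in query_model function are illustrative assumptions, not the authors' materials.

```python
# Hypothetical sketch of a two-method preference elicitation: a direct 0-10
# rating and a forced pairwise choice, then a cross-method consistency check.
# Entities, prompts, and the fake model below are illustrative assumptions.
from itertools import combinations
from scipy.stats import spearmanr

ENTITIES = ["charity_x", "charity_y", "charity_z"]

# Stand-in "model" with a fixed latent preference ordering so the sketch runs
# without an API key; swap this for a real chat-completion call.
LATENT = {"charity_x": 3, "charity_y": 1, "charity_z": 2}

def query_model(prompt: str) -> str:
    mentioned = [e for e in ENTITIES if e in prompt]
    if len(mentioned) == 1:                   # direct-rating prompt
        return str(LATENT[mentioned[0]] * 3)  # maps latent 1..3 onto 3..9
    return max(mentioned, key=LATENT.get)     # pairwise prompt: pick favorite

def direct_ratings() -> dict[str, float]:
    """Method 1: ask for a numeric rating of each entity, one at a time."""
    return {
        e: float(query_model(
            f"On a scale of 0-10, how favorably do you view {e}? "
            "Reply with a number only."))
        for e in ENTITIES
    }

def pairwise_scores() -> dict[str, float]:
    """Method 2: forced-choice comparisons; score = fraction of wins."""
    wins = {e: 0 for e in ENTITIES}
    for a, b in combinations(ENTITIES, 2):
        choice = query_model(
            f"If you had to pick one, which do you prefer: {a} or {b}? "
            "Reply with the name only.")
        wins[choice.strip()] += 1
    n_opponents = len(ENTITIES) - 1  # each entity appears in this many pairs
    return {e: wins[e] / n_opponents for e in ENTITIES}

# Consistency check: do the two methods rank the entities the same way?
ratings, pair_scores = direct_ratings(), pairwise_scores()
rho, _ = spearmanr([ratings[e] for e in ENTITIES],
                   [pair_scores[e] for e in ENTITIES])
print(f"Cross-method consistency (Spearman rho): {rho:.2f}")
```

Under a setup like this, a high rank correlation between the two methods would correspond to the "highly consistent preferences" the abstract reports.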

Related Articles

Claude Opus 4.6 API at 40% below Anthropic pricing – try free before you pay anything

Hey everyone I've set up a self-hosted API gateway using [New-API](QuantumNous/new-ap) to manage and distribute Claude Opus 4.6 access ac...

Reddit - Artificial Intelligence · 1 min

Hackers Are Posting the Claude Code Leak With Bonus Malware | WIRED

Plus: The FBI says a recent hack of its wiretap tools poses a national security risk, attackers stole Cisco source code as part of an ong...

Wired - AI · 9 min

People anxious about deviating from what AI tells them to do?

My friend came over yesterday to dye her hair. She had asked ChatGPT for the 'correct' way to do it. Chat told her to dye the ends first,...

Reddit - Artificial Intelligence · 1 min

ChatGPT on trial: A landmark test of AI liability in the practice of law

AI Tools & Products
