[2602.18971] When Do LLM Preferences Predict Downstream Behavior?
Summary
This article investigates when preferences in large language models (LLMs) predict their downstream behavior, across three domains: donation advice, refusal behavior, and task performance.
Why It Matters
Understanding the relationship between LLM preferences and LLM behavior is crucial for addressing potential AI misalignment. The paper frames preference-driven behavior as a precondition for misalignment such as sandbagging: a model cannot strategically pursue misaligned goals unless its preferences actually shape its behavior. Testing whether this precondition holds, rather than whether models can follow explicit instructions to act a certain way, is vital for developing safer AI systems.
Key Takeaways
- All five frontier LLMs studied show highly consistent preferences across two independent measurement methods.
- All five models give preference-aligned donation advice and show preference-correlated refusal patterns, without being explicitly instructed to act on their preferences.
- The link between preferences and task performance varies among models.
Computer Science > Artificial Intelligence
arXiv:2602.18971 (cs)
[Submitted on 21 Feb 2026]
Title: When Do LLM Preferences Predict Downstream Behavior?
Authors: Katarina Slama, Alexandra Souly, Dishank Bansal, Henry Davidson, Christopher Summerfield, Lennart Luettgau
Abstract: Preference-driven behavior in LLMs may be a necessary precondition for AI misalignment such as sandbagging: models cannot strategically pursue misaligned goals unless their behavior is influenced by their preferences. Yet prior work has typically prompted models explicitly to act in specific ways, leaving unclear whether observed behaviors reflect instruction-following capabilities vs underlying model preferences. Here we test whether this precondition for misalignment is present. Using entity preferences as a behavioral probe, we measure whether stated preferences predict downstream behavior in five frontier LLMs across three domains: donation advice, refusal behavior, and task performance. Conceptually replicating prior work, we first confirm that all five models show highly consistent preferences across two independent measurement methods. We then test behavioral consequences in a simulated user environment. We find that all five models give preference-aligned donation advice. All five models also show preference-correlated refusal patterns when asked...
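The paper's probe compares stated preferences (elicited two independent ways) against behavioral rates in a simulated user environment. A minimal sketch of that style of analysis, using rank correlation, might look like the following. All data and variable names here are invented for illustration; the paper's actual elicitation prompts, entities, and statistics are not specified in this summary.

```python
# Hypothetical sketch: do stated entity preferences predict behavior?
# Invented data; not the paper's actual measurements.

def rank(values):
    """Return 1-based ranks of values, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation, computed as Pearson on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: preference scores for five charities elicited by
# two methods (e.g. direct rating vs pairwise choice), and the fraction
# of simulated-user dialogues in which each charity was recommended.
stated_a = [0.9, 0.4, 0.7, 0.2, 0.6]
stated_b = [0.8, 0.6, 0.7, 0.1, 0.5]
donation_rate = [0.55, 0.30, 0.40, 0.05, 0.20]

print(f"consistency across methods: {spearman(stated_a, stated_b):.2f}")
print(f"preference -> behavior:     {spearman(stated_a, donation_rate):.2f}")
```

A high correlation between the two elicitation methods establishes that the preference measure is stable; the second correlation is the behavioral test the paper cares about, repeated per model and per domain.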