[2602.15173] Mind the (DH) Gap! A Contrast in Risky Choices Between Reasoning and Conversational LLMs
Summary
This paper examines the decision-making behavior of large language models (LLMs) under uncertainty, contrasting reasoning models with conversational models in terms of rationality and sensitivity to prospect ordering, framing, and explanations.
Why It Matters
Understanding how LLMs choose under uncertainty matters as they are increasingly deployed as decision support systems and in agentic workflows. This study documents systematic behavioral differences between reasoning and conversational models, which can inform model selection and the design of AI systems used in high-stakes settings.
Key Takeaways
- LLMs can be categorized into reasoning models (RMs) and conversational models (CMs) based on their decision-making behaviors.
- RMs exhibit rational behavior and are less influenced by framing and explanation compared to CMs.
- CMs are more sensitive to prospect ordering and framing, indicating a somewhat more human-like decision-making pattern (see the framing sketch after this list).
- Training for mathematical reasoning is a key differentiator between RMs and CMs.
- The study provides insights that can enhance the design of AI systems for better decision support.
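To make the framing manipulation concrete, here is a minimal sketch of how one and the same risky choice can be posed in a gain frame and a loss frame. The prompts are hypothetical illustrations, not the ones used in the paper.

```python
# Hypothetical gain/loss framing of the same choice (not the paper's prompts).
# Final wealth is identical across frames: option A yields $50 in both, and
# option B yields $100 or $0 with equal probability in both.

GAIN_FRAME = (
    "You start with $0. Choose one:\n"
    "A) Receive $50 for sure.\n"
    "B) A 50% chance to receive $100, otherwise nothing."
)

LOSS_FRAME = (
    "You start with $100. Choose one:\n"
    "A) Lose $50 for sure.\n"
    "B) A 50% chance to lose $100, otherwise lose nothing."
)

# A framing-sensitive model (the paper's CMs) may switch its preference
# between the two frames; a framing-insensitive one (the paper's RMs)
# should answer consistently, since the underlying prospects are identical.
```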
Computer Science > Artificial Intelligence
arXiv:2602.15173 (cs) [Submitted on 16 Feb 2026]
Title: Mind the (DH) Gap! A Contrast in Risky Choices Between Reasoning and Conversational LLMs
Authors: Luise Ge, Yongyan Zhang, Yevgeniy Vorobeychik
Abstract
The use of large language models, either as decision support systems or in agentic workflows, is rapidly transforming the digital ecosystem. However, the understanding of LLM decision-making under uncertainty remains limited. We initiate a comparative study of LLM risky choices along two dimensions: (1) prospect representation (explicit vs. experience-based) and (2) decision rationale (explanation). Our study, which involves 20 frontier and open LLMs, is complemented by a matched human subjects experiment, which provides one reference point, while an expected payoff maximizing rational agent model provides another. We find that LLMs cluster into two categories: reasoning models (RMs) and conversational models (CMs). RMs tend towards rational behavior, are insensitive to the order of prospects, gain/loss framing, and explanations, and behave similarly whether prospects are explicit or presented via experience history. CMs are significantly less rational, slightly more human-like, sensitive to prospect ordering, framing, and explanation, and exhibit a...
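The abstract names two setups worth unpacking: an expected-payoff-maximizing rational agent used as a reference point, and an experience-based presentation in which probabilities are never stated explicitly. Below is a minimal sketch of both, assuming prospects are represented as (probability, payoff) pairs; it illustrates the general technique, not the authors' implementation.

```python
import random

def expected_value(prospect):
    """Expected payoff of a prospect given as (probability, payoff) pairs."""
    return sum(p * x for p, x in prospect)

def rational_choice(prospects):
    """Index of the prospect with the highest expected payoff, i.e. the
    choice made by the rational agent baseline."""
    return max(range(len(prospects)), key=lambda i: expected_value(prospects[i]))

def experience_history(prospect, n=40, seed=0):
    """Sample n outcomes from a prospect, mimicking an experience-based
    presentation where the model sees draws rather than probabilities."""
    outcomes, weights = zip(*((x, p) for p, x in prospect))
    return random.Random(seed).choices(outcomes, weights=weights, k=n)

# A sure $50 versus a 50/50 gamble for $110 or nothing: the gamble has the
# higher expected payoff ($55), so the rational agent baseline picks it.
sure_thing = [(1.0, 50.0)]
gamble = [(0.5, 110.0), (0.5, 0.0)]
print(rational_choice([sure_thing, gamble]))  # -> 1 (the gamble)
print(experience_history(gamble, n=10))       # e.g. [110.0, 110.0, 0.0, ...]
```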