[2602.15173] Mind the (DH) Gap! A Contrast in Risky Choices Between Reasoning and Conversational LLMs

Summary

This paper examines the decision-making behaviors of large language models (LLMs) under uncertainty, contrasting reasoning models with conversational models in terms of rationality and sensitivity to framing.

Why It Matters

Understanding how LLMs behave under uncertainty is crucial as they are increasingly embedded in decision-support systems and agentic workflows. This study highlights significant behavioral differences between reasoning and conversational models, which can inform future AI development and deployment in critical areas.

Key Takeaways

  • LLMs can be categorized into reasoning models (RMs) and conversational models (CMs) based on their decision-making behaviors.
  • RMs exhibit rational behavior and are less influenced by framing and explanation compared to CMs.
  • CMs are more sensitive to prospect ordering and framing, indicating a human-like decision-making pattern.
  • Training for mathematical reasoning is a key differentiator between RMs and CMs.
  • The study provides insights that can enhance the design of AI systems for better decision support.

Computer Science > Artificial Intelligence
arXiv:2602.15173 (cs) [Submitted on 16 Feb 2026]

Title: Mind the (DH) Gap! A Contrast in Risky Choices Between Reasoning and Conversational LLMs
Authors: Luise Ge, Yongyan Zhang, Yevgeniy Vorobeychik

Abstract: The use of large language models either as decision support systems, or in agentic workflows, is rapidly transforming the digital ecosystem. However, the understanding of LLM decision-making under uncertainty remains limited. We initiate a comparative study of LLM risky choices along two dimensions: (1) prospect representation (explicit vs. experience-based) and (2) decision rationale (explanation). Our study, which involves 20 frontier and open LLMs, is complemented by a matched human-subjects experiment, which provides one reference point, while an expected-payoff-maximizing rational agent model provides another. We find that LLMs cluster into two categories: reasoning models (RMs) and conversational models (CMs). RMs tend towards rational behavior, are insensitive to the order of prospects, gain/loss framing, and explanations, and behave similarly whether prospects are explicit or presented via experience history. CMs are significantly less rational, slightly more human-like, sensitive to prospect ordering, framing, and explanation, and exhibit a...
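The rational baseline the abstract mentions, an expected-payoff-maximizing agent, and the two prospect representations (explicit vs. experience-based) can be illustrated with a minimal sketch. The prospect structure, payoffs, and function names below are illustrative assumptions, not the paper's actual experimental protocol:

```python
import random

def expected_payoff(prospect):
    """Expected payoff of an explicitly described prospect,
    given as a list of (outcome, probability) pairs."""
    return sum(outcome * p for outcome, p in prospect)

def sampled_payoff(history):
    """Estimate a prospect's payoff from an experience history:
    a list of outcomes observed by sampling the prospect."""
    return sum(history) / len(history)

def rational_choice(prospects):
    """The rational-agent baseline: pick the index of the
    prospect with the highest expected payoff."""
    return max(range(len(prospects)),
               key=lambda i: expected_payoff(prospects[i]))

# Two hypothetical prospects: a sure $3 vs. an 80% chance of $4.
sure_thing = [(3.0, 1.0)]
gamble = [(4.0, 0.8), (0.0, 0.2)]

print(expected_payoff(sure_thing))            # 3.0
print(expected_payoff(gamble))                # 3.2
print(rational_choice([sure_thing, gamble]))  # 1 (the gamble)

# Experience-based presentation: the agent never sees the
# probabilities, only a history of sampled draws.
random.seed(0)
history = [4.0 if random.random() < 0.8 else 0.0 for _ in range(100)]
print(round(sampled_payoff(history), 2))  # close to, but not exactly, 3.2
```

A rational agent chooses identically under both representations (up to sampling noise); the paper's finding is that RMs approximate this invariance while CMs do not, echoing the description-experience gap documented in human subjects.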
