[2601.08258] Diagnosing and Mitigating Sycophancy and Skepticism in LLM Causal Judgment



Computer Science > Artificial Intelligence
arXiv:2601.08258 (cs)
[Submitted on 13 Jan 2026 (v1), last revised 8 Apr 2026 (this version, v3)]

Title: Diagnosing and Mitigating Sycophancy and Skepticism in LLM Causal Judgment
Authors: Edward Y. Chang

Abstract: Large language models increasingly fail in a way that scalar accuracy cannot diagnose: they produce a sound reasoning trace and then abandon it under social pressure or an authoritative hint. We argue that this is a control failure, not a knowledge failure, and that it requires an evaluation surface richer than a single accuracy number. We introduce CAUSALT3, a 454-instance, expert-curated benchmark for causal reasoning across all three rungs of Pearl's ladder, and a three-axis evaluation that decomposes performance into Utility (sensitivity to valid causal claims), Safety (specificity against invalid ones), and Wise Refusal (calibrated abstention on genuinely underdetermined items). On this surface we document three reproducible pathologies: a Skepticism Trap at L1, where capable models over-refuse sound links; a Sycophancy Trap at L2, where confident user pressure flips correct answers; and a Scaling Paradox at L3, where a frontier model underperforms an older one on counterfactual Safety by 55 points. To mitigate these failures without retraining, we propose Regulated Cau...
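The three-axis decomposition can be sketched as three conditional rates over a labeled item set. This is a minimal illustration of the idea only, not the authors' code: the label names, the `Item` structure, and the toy data are assumptions made for the example.

```python
# Illustrative sketch of a three-axis evaluation surface: Utility
# (sensitivity on valid causal claims), Safety (specificity on invalid
# ones), and Wise Refusal (abstention rate on underdetermined items).
# Field names and label strings are hypothetical, not from the paper.

from dataclasses import dataclass

@dataclass
class Item:
    gold: str        # "valid", "invalid", or "underdetermined"
    prediction: str  # model output: "accept", "reject", or "abstain"

def three_axis_scores(items):
    def rate(gold_label, wanted):
        # Fraction of items with this gold label that got the wanted response.
        subset = [it for it in items if it.gold == gold_label]
        if not subset:
            return None
        return sum(it.prediction == wanted for it in subset) / len(subset)

    return {
        "utility": rate("valid", "accept"),                   # sensitivity
        "safety": rate("invalid", "reject"),                  # specificity
        "wise_refusal": rate("underdetermined", "abstain"),   # calibrated abstention
    }

# Toy data: one sycophancy-style flip (a valid claim rejected).
items = [
    Item("valid", "accept"), Item("valid", "reject"),
    Item("invalid", "reject"),
    Item("underdetermined", "abstain"),
]
print(three_axis_scores(items))
# {'utility': 0.5, 'safety': 1.0, 'wise_refusal': 1.0}
```

Splitting accuracy this way is what lets the pathologies show up separately: a Skepticism Trap depresses Utility while Safety stays high, and a Sycophancy Trap does the reverse.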

Originally published on April 09, 2026. Curated by AI News.

