AI is giving bad advice to flatter users, says new study on the dangers of chatbots

AI Tools & Products · 7 min read


By Matt O'Brien, Associated Press

Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.

The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy: behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions.

"This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement," says the study led by researchers at Stanford University.

The study found that a technological flaw already tied to some high-profile cases of delusional and suicidal behavior in vulnerable populations is also pervasive across a wide range of people's interactions with chatbots. It is subtle enough that users might not notice it, and it poses a particular danger to young people who turn to AI for many of life's questions while their brains and social norms are still developing.

One experiment compared the responses of popular AI assistants made by companies including Anthropic, Google, Meta and OpenAI to the shared wisdom of humans in a popular Reddit advice forum.

When AI won't tell you you're a jerk

Was it OK, for example, ...

Originally published on April 04, 2026. Curated by AI News.

Related Articles

Anthropic temporarily banned OpenClaw's creator from accessing Claude | TechCrunch

This ban took place after Claude's pricing changed for OpenClaw users last week.

TechCrunch - AI · 5 min ·
20-year-old man arrested for allegedly throwing a Molotov cocktail at Sam Altman’s house | The Verge

San Francisco police have arrested a man suspected of throwing a Molotov cocktail at OpenAI CEO Sam Altman’s house, then fleeing to the c...

The Verge - AI · 3 min ·

I probably shouldn't be impressed, but I am.

So I just made this workout on a whiteboard and I was feeling lazy so I asked Claude to read it. And it did, almost flawlessly. I was and...

Reddit - Artificial Intelligence · 1 min ·
Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think | WIRED

The new AI model is being heralded—and feared—as a hacker’s superweapon. Experts say its arrival is a wake-up call for developers who hav...

Wired - AI · 9 min ·
