[2602.18092] Perceived Political Bias in LLMs Reduces Persuasive Abilities



Summary

This paper reports a survey experiment showing that perceived political bias in large language models (LLMs) reduces their persuasive effectiveness, with significant implications for their use in public discourse.

Why It Matters

Understanding the impact of perceived bias in LLMs is crucial as these systems become more integrated into society. The findings show that perceptions of partisan alignment can hinder the ability of LLMs to correct misconceptions and influence public opinion, highlighting the need for developers to address bias concerns if these systems are to remain credible and persuasive.

Key Takeaways

  • A short warning that the LLM was biased against the respondent's party reduced its persuasive effect by 28% relative to a neutral control.
  • Participants showed increased resistance and less receptiveness when warned of bias against their political affiliation.
  • The study highlights the importance of political neutrality for LLMs in public discourse.
  • Credibility attacks on LLMs can significantly alter user interactions.
  • Developers must consider perceptions of bias to improve LLM engagement and persuasion.

Computer Science > Computation and Language

arXiv:2602.18092 (cs) [Submitted on 20 Feb 2026]

Title: Perceived Political Bias in LLMs Reduces Persuasive Abilities
Authors: Matthew DiGiuseppe, Joshua Robison

Abstract: Conversational AI has been proposed as a scalable way to correct public misconceptions and counter misinformation. Yet its effectiveness may depend on perceptions of its political neutrality. As LLMs enter partisan conflict, elites increasingly portray them as ideologically aligned. We test whether these credibility attacks reduce LLM-based persuasion. In a preregistered U.S. survey experiment (N=2144), participants completed a three-round conversation with ChatGPT about a personally held economic policy misconception. Compared to a neutral control, a short message indicating that the LLM was biased against the respondent's party attenuated persuasion by 28%. Transcript analysis indicates that the warnings alter the interaction: respondents push back more and engage less receptively. These findings suggest that the persuasive impact of conversational AI is politically contingent, constrained by perceptions of partisan alignment.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
Cite as: arXiv:2602.18092 [cs.CL]


