[2602.18092] Perceived Political Bias in LLMs Reduces Persuasive Abilities
Summary
This article examines how perceived political bias in large language models (LLMs) diminishes their persuasive effectiveness, with significant implications for their use in public discourse.
Why It Matters
Understanding the impact of perceived bias in LLMs is crucial as these technologies become more integrated into society. The findings suggest that perceptions of partisan alignment can hinder the ability of LLMs to correct misconceptions and influence public opinion, highlighting the need for developers to address bias concerns if LLMs are to remain credible, persuasive interlocutors.
Key Takeaways
- Perceived political bias in LLMs can reduce their persuasive effectiveness by 28%.
- Participants pushed back more and engaged less receptively when warned the model was biased against their political affiliation.
- The study highlights the importance of political neutrality for LLMs in public discourse.
- Credibility attacks on LLMs can significantly alter user interactions.
- Developers must consider perceptions of bias to improve LLM engagement and persuasion.
Computer Science > Computation and Language
arXiv:2602.18092 (cs.CL)
[Submitted on 20 Feb 2026]
Title: Perceived Political Bias in LLMs Reduces Persuasive Abilities
Authors: Matthew DiGiuseppe, Joshua Robison
Abstract: Conversational AI has been proposed as a scalable way to correct public misconceptions and counter misinformation. Yet its effectiveness may depend on perceptions of its political neutrality. As LLMs enter partisan conflict, elites increasingly portray them as ideologically aligned. We test whether these credibility attacks reduce LLM-based persuasion. In a preregistered U.S. survey experiment (N=2144), participants completed a three-round conversation with ChatGPT about a personally held economic policy misconception. Compared to a neutral control, a short message indicating that the LLM was biased against the respondent's party attenuated persuasion by 28%. Transcript analysis indicates that the warnings alter the interaction: respondents push back more and engage less receptively. These findings suggest that the persuasive impact of conversational AI is politically contingent, constrained by perceptions of partisan alignment.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
Cite as: arXiv:2602.18092 [cs.CL]