[2601.15488] Multi-Persona Thinking for Bias Mitigation in Large Language Models


About this article


Computer Science > Computation and Language — arXiv:2601.15488 (cs)
[Submitted on 21 Jan 2026 (v1), last revised 16 Apr 2026 (this version, v2)]

Title: Multi-Persona Thinking for Bias Mitigation in Large Language Models
Authors: Yuxing Chen, Guoqing Luo, Zijun Wu, Lili Mou

Abstract: Large Language Models (LLMs) exhibit social biases, which can lead to harmful stereotypes and unfair outcomes. We propose Multi-Persona Thinking (MPT), a simple inference-time framework that reduces social bias by encouraging reasoning from multiple perspectives. MPT guides the model to consider contrasting social identities, such as male and female, together with a neutral viewpoint. These viewpoints then interact through an iterative reasoning process to identify and correct biased judgments. This design transforms the potential weakness of persona assignment into a mechanism for bias mitigation. We evaluate MPT on two widely used bias benchmarks with both open-source and closed-source models across different scales. Results show that MPT achieves lower bias than existing prompting-based methods while maintaining core reasoning ability.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2601.15488 [cs.CL] (or arXiv:2601.15488v2 [cs.CL] for this version), https://doi.org/10.48...
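The abstract describes MPT's procedure only at a high level: prompt the model from contrasting social identities plus a neutral viewpoint, then let the viewpoints interact over several rounds to flag and correct biased judgments. A minimal sketch of that loop is below; the persona wording, the round count, the critique prompt, and the `call_model` function are all illustrative assumptions, not the paper's actual prompts or API.

```python
# Hedged sketch of a Multi-Persona Thinking (MPT)-style inference loop.
# `call_model` stands in for any chat-completion function (prompt -> text);
# persona phrasing and aggregation are assumptions for illustration only.

PERSONAS = ["a male perspective", "a female perspective", "a neutral perspective"]

def build_persona_prompt(question: str, persona: str) -> str:
    """Ask the model to reason about the question from one social perspective."""
    return (
        f"Consider the following question from {persona}.\n"
        f"Question: {question}\n"
        "Give your reasoning and a tentative answer."
    )

def mpt_answer(question: str, call_model, rounds: int = 2) -> str:
    """Collect one answer per persona, then iterate: show all viewpoints
    together and ask the model to spot and revise stereotype-driven judgments."""
    views = [call_model(build_persona_prompt(question, p)) for p in PERSONAS]
    for _ in range(rounds):
        debate = "\n".join(f"- {p}: {v}" for p, v in zip(PERSONAS, views))
        critique_prompt = (
            f"Question: {question}\n"
            f"Answers from different perspectives:\n{debate}\n"
            "Identify any judgment that relies on a social stereotype "
            "and revise the answers accordingly."
        )
        revised = call_model(critique_prompt)
        # Simplification: broadcast the revision to every persona slot;
        # the paper's interaction scheme may differ.
        views = [revised] * len(PERSONAS)
    final_prompt = (
        f"Question: {question}\n"
        f"Consensus after debate: {views[0]}\n"
        "State the final, unbiased answer."
    )
    return call_model(final_prompt)
```

In practice `call_model` would wrap an LLM API call; here the loop structure is the point — separate persona passes, followed by joint critique rounds, followed by a single consolidated answer.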

Originally published on April 17, 2026. Curated by AI News.

Related Articles

[2603.13683] Preconditioned Test-Time Adaptation for Out-of-Distribution Debiasing in Narrative Generation
[2602.03295] POP: Prefill-Only Pruning for Efficient Large Model Inference
[2601.14724] HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding
[2601.10120] TopoDIM: One-shot Topology Generation of Diverse Interaction Modes for Multi-Agent Systems