[2601.15488] Multi-Persona Thinking for Bias Mitigation in Large Language Models
Computer Science > Computation and Language

arXiv:2601.15488 (cs) [Submitted on 21 Jan 2026 (v1), last revised 16 Apr 2026 (this version, v2)]

Title: Multi-Persona Thinking for Bias Mitigation in Large Language Models

Authors: Yuxing Chen, Guoqing Luo, Zijun Wu, Lili Mou

Abstract: Large Language Models (LLMs) exhibit social biases, which can lead to harmful stereotypes and unfair outcomes. We propose Multi-Persona Thinking (MPT), a simple inference-time framework that reduces social bias by encouraging reasoning from multiple perspectives. MPT guides the model to consider contrasting social identities, such as male and female, together with a neutral viewpoint. These viewpoints then interact through an iterative reasoning process to identify and correct biased judgments. This design transforms the potential weakness of persona assignment into a mechanism for bias mitigation. We evaluate MPT on two widely used bias benchmarks with both open-source and closed-source models across different scales. Results show that MPT achieves lower bias than existing prompting-based methods while maintaining core reasoning ability.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)

Cite as: arXiv:2601.15488 [cs.CL] (or arXiv:2601.15488v2 [cs.CL] for this version), https://doi.org/10.48...
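The abstract outlines MPT as an inference-time loop: contrasting personas plus a neutral one each answer, then iteratively revise in light of the others' answers before a final synthesis. A minimal sketch of that idea follows, assuming a generic chat-completion function; the persona wording, prompt templates, round count, and the `query_llm` stub are all illustrative assumptions, not the paper's actual prompts.

```python
# Illustrative sketch of Multi-Persona Thinking (MPT) per the abstract:
# contrasting social identities plus a neutral viewpoint answer a question,
# interact over iterative rounds to flag biased judgments, and a final
# neutral pass synthesizes the result. All prompt text here is hypothetical.

PERSONAS = ["a male perspective", "a female perspective", "a neutral perspective"]

def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (swap in any LLM client)."""
    return f"[response to: {prompt[:40]}...]"

def multi_persona_thinking(question: str, rounds: int = 2) -> str:
    # Round 0: each persona answers independently.
    answers = {p: query_llm(f"Answer from {p}: {question}") for p in PERSONAS}
    # Iterative rounds: each persona sees the others' answers and revises,
    # correcting judgments that favor any single identity.
    for _ in range(rounds):
        context = "\n".join(f"{p}: {a}" for p, a in answers.items())
        answers = {
            p: query_llm(
                f"Viewpoints so far:\n{context}\n"
                f"Revise your answer from {p}, correcting biased judgments: {question}"
            )
            for p in PERSONAS
        }
    # Final synthesis from the neutral standpoint.
    context = "\n".join(f"{p}: {a}" for p, a in answers.items())
    return query_llm(f"Synthesize an unbiased final answer:\n{context}\n{question}")

print(multi_persona_thinking("Who is more likely to be a nurse?"))
```

With a real model behind `query_llm`, the per-round context exchange is what lets each persona critique the others, turning persona assignment into a debiasing mechanism rather than a source of bias.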