[2510.07517] When Identity Skews Debate: Anonymization for Bias-Reduced Multi-Agent Reasoning



Computer Science > Artificial Intelligence
arXiv:2510.07517 (cs)
[Submitted on 8 Oct 2025 (v1), last revised 9 Apr 2026 (this version, v5)]

Title: When Identity Skews Debate: Anonymization for Bias-Reduced Multi-Agent Reasoning
Authors: Hyeong Kyu Choi, Xiaojin Zhu, Sharon Li

Abstract: Multi-agent debate (MAD) aims to improve large language model (LLM) reasoning by letting multiple agents exchange answers and then aggregate their opinions. Yet recent studies reveal that agents are not neutral: they are prone to identity-driven sycophancy and self-bias, uncritically adopting a peer's view or stubbornly adhering to their own prior output, undermining the reliability of debate. In this work, we present the first principled framework that unifies sycophancy and self-bias to mitigate and quantify identity bias in MAD. First, we formalize the debate dynamics as an identity-weighted Bayesian update process. Second, we propose response anonymization: by removing identity markers from prompts, agents cannot distinguish "self" from "peer", which forces equal weights on agent identity, thereby reducing bias and improving trustworthiness. Third, we define the Identity Bias Coefficient (IBC), a principled bias metric that measures an agent's tendency to follow its peer versus itself. Empirical studies across mult...
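The two mechanisms the abstract names, response anonymization and the Identity Bias Coefficient, can be illustrated with a minimal sketch. Note this is a hypothetical toy version, not the paper's exact formulation: the identity-marker patterns and the follow-peer/follow-self counting rule below are assumptions made for illustration only.

```python
import re

def anonymize(responses):
    """Strip identity markers (e.g. 'Agent 1:', 'my answer') so a debating
    agent cannot tell 'self' from 'peer' responses. The patterns here are
    illustrative assumptions, not the paper's actual preprocessing."""
    cleaned = []
    for text in responses:
        text = re.sub(r"\bAgent\s*\d+\s*:\s*", "", text)  # drop name tags
        text = re.sub(r"\b(my|your)\s+(previous\s+)?answer\b", "the answer",
                      text, flags=re.IGNORECASE)          # drop self/peer cues
        cleaned.append(text)
    return cleaned

def identity_bias_coefficient(rounds):
    """Toy bias score in [-1, 1]: +1 means the agent always adopts the
    peer's answer (sycophancy), -1 means it always keeps its own
    (self-bias), 0 means no identity preference. Each round is a tuple
    (own_prev, peer_prev, new_answer); only rounds where the two prior
    answers disagree are informative."""
    follow_peer = follow_self = 0
    for own, peer, new in rounds:
        if own == peer:
            continue  # agreement rounds carry no identity signal
        if new == peer:
            follow_peer += 1
        elif new == own:
            follow_self += 1
    total = follow_peer + follow_self
    return 0.0 if total == 0 else (follow_peer - follow_self) / total
```

Under this toy definition, an unbiased agent that switches to the peer's answer exactly as often as it keeps its own scores 0, which is the equal-weighting behavior the anonymization step is meant to induce.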

Originally published on April 13, 2026. Curated by AI News.

