[2603.28360] CoE: Collaborative Entropy for Uncertainty Quantification in Agentic Multi-LLM Systems
Computer Science > Artificial Intelligence
arXiv:2603.28360 (cs) [Submitted on 30 Mar 2026]

Title: CoE: Collaborative Entropy for Uncertainty Quantification in Agentic Multi-LLM Systems
Authors: Kangkang Sun, Jun Wu, Jianhua Li, Minyi Guo, Xiuzhen Che, Jianwei Huang

Abstract: Uncertainty estimation in multi-LLM systems remains largely single-model-centric: existing methods quantify uncertainty within each model but do not adequately capture semantic disagreement across models. To address this gap, we propose Collaborative Entropy (CoE), a unified information-theoretic metric for semantic uncertainty in multi-LLM collaboration. CoE is defined on a shared semantic cluster space and combines two components: intra-model semantic entropy and inter-model divergence to the ensemble mean. CoE is not a weighted ensemble predictor; it is a system-level uncertainty measure that characterizes collaborative confidence and disagreement. We analyze several core properties of CoE, including non-negativity, zero-value certainty under perfect semantic consensus, and the behavior of CoE when individual models collapse to delta distributions. These results clarify when reducing per-model uncertainty is sufficient and when residual inter-model disagreement remains. We also present a simple CoE-guided, training-free ...
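The abstract's decomposition can be illustrated with a minimal sketch. The code below assumes each model's output has already been mapped to a probability distribution over a shared semantic cluster space, takes the intra-model term to be the mean semantic entropy, and takes the inter-model term to be the mean KL divergence from each model's distribution to the ensemble mean. The exact weighting and divergence used in the paper are not given in the abstract, so this additive form is an illustrative assumption, not the authors' definition.

```python
import numpy as np

def semantic_entropy(p):
    """Shannon entropy (nats) of a distribution over semantic clusters."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]  # 0 * log 0 is taken as 0
    return float(-np.sum(nz * np.log(nz)))

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats; eps guards against division by zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

def collaborative_entropy(dists):
    """Sketch of a CoE-style score: mean intra-model semantic entropy
    plus mean inter-model KL divergence to the ensemble mean.
    `dists` is a (num_models, num_clusters) array of cluster distributions."""
    P = np.asarray(dists, dtype=float)
    p_bar = P.mean(axis=0)  # ensemble mean distribution
    intra = np.mean([semantic_entropy(p) for p in P])
    inter = np.mean([kl_divergence(p, p_bar) for p in P])
    return intra + inter

# Perfect semantic consensus: every model is a delta on the same cluster,
# so both terms vanish and the score is (numerically) zero.
consensus = [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]

# Each model collapses to a delta distribution, but on different clusters:
# intra-model entropy is zero, yet inter-model disagreement remains positive,
# matching the delta-collapse behavior the abstract describes.
disagreement = [[1.0, 0.0], [0.0, 1.0]]
```

This toy example reproduces two of the stated properties: the score is zero under perfect semantic consensus, and residual inter-model disagreement keeps it strictly positive even when every per-model distribution is a delta.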