[2602.14251] Multi-Agent Debate: A Unified Agentic Framework for Tabular Anomaly Detection
Summary
The paper presents the Multi-Agent Debate (MAD) framework for tabular anomaly detection, leveraging multiple ML detectors and a large language model to enhance robustness and interpretability in anomaly scoring.
Why It Matters
This research is significant as it addresses the limitations of traditional anomaly detection methods by introducing a collaborative approach that utilizes model disagreement as a valuable signal. The MAD framework aims to improve performance in challenging scenarios, such as distribution shifts and rare anomalies, which are common in real-world applications.
Key Takeaways
- MAD framework integrates multiple ML detectors to enhance anomaly detection.
- Pairs the ML detectors with a large language model (LLM)-based critic that reviews their structured evidence.
- Improves robustness against distribution shifts and rare anomalies.
- Offers auditable debate traces for better interpretability.
- Establishes regret guarantees for synthesized losses, enhancing reliability.
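The coordination step described above can be sketched in a few lines. The snippet below is a minimal, illustrative Hedge-style exponentiated-gradient update, not the paper's exact protocol: function and variable names (`debate_round`, `eta`) are assumptions, and the paper's bounded per-agent losses are stood in for by arbitrary values in [0, 1].

```python
import numpy as np

def debate_round(scores, losses, weights, eta=0.5):
    """One illustrative coordination round (Hedge-style exponentiated gradient).

    scores:  per-agent anomaly scores, normalized to [0, 1]
    losses:  bounded per-agent losses in [0, 1] (lower = more credible this round)
    weights: current agent influence, summing to 1
    eta:     learning rate (an assumed hyperparameter, not from the paper)
    """
    # Exponentiated-gradient update: down-weight agents with high loss.
    new_w = weights * np.exp(-eta * losses)
    new_w /= new_w.sum()
    # Debated score: influence-weighted combination of agent scores.
    debated_score = float(new_w @ scores)
    return debated_score, new_w

# Example: agent 1 incurs the highest loss, so its influence shrinks.
score, w = debate_round(
    scores=np.array([0.9, 0.1, 0.5]),
    losses=np.array([0.0, 1.0, 0.5]),
    weights=np.ones(3) / 3,
)
```

Updates of this multiplicative-weights form are what make regret guarantees of the kind the paper claims possible, since the aggregate's cumulative loss can be bounded relative to the best single agent in hindsight.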
Computer Science > Machine Learning
arXiv:2602.14251 (cs) [Submitted on 15 Feb 2026]
Title: Multi-Agent Debate: A Unified Agentic Framework for Tabular Anomaly Detection
Authors: Pinqiao Wang, Sheng Li
Abstract: Tabular anomaly detection is often handled by single detectors or static ensembles, even though strong performance on tabular data typically comes from heterogeneous model families (e.g., tree ensembles, deep tabular networks, and tabular foundation models) that frequently disagree under distribution shift, missingness, and rare-anomaly regimes. We propose MAD, a Multi-Agent Debating framework that treats this disagreement as a first-class signal and resolves it through a mathematically grounded coordination layer. Each agent is a machine learning (ML)-based detector that produces a normalized anomaly score, confidence, and structured evidence, augmented by a large language model (LLM)-based critic. A coordinator converts these messages into bounded per-agent losses and updates agent influence via an exponentiated-gradient rule, yielding both a final debated anomaly score and an auditable debate trace. MAD is a unified agentic framework that can recover existing approaches, such as mixture-of-experts gating and learning-with-expert-advice aggregation, by restricting the message space and synthesi...
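The abstract's agent messages (normalized score, confidence, structured evidence) could be modeled as below. This is a hypothetical schema under assumed field names, together with a simple min-max normalization that puts heterogeneous detector outputs on the shared [0, 1] scale the coordinator needs; the paper may use a different normalization.

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    # Illustrative message schema; field names are assumptions, not the paper's.
    score: float       # anomaly score normalized to [0, 1]
    confidence: float  # self-reported confidence in [0, 1]
    evidence: str      # structured evidence summary passed to the LLM critic

def normalize_scores(raw):
    """Min-max normalize raw detector scores to a shared [0, 1] scale.

    A plausible choice for making tree ensembles, deep tabular networks,
    and foundation models comparable; the paper's exact scheme may differ.
    """
    lo, hi = min(raw), max(raw)
    if hi == lo:
        return [0.5] * len(raw)  # degenerate batch: no ranking information
    return [(r - lo) / (hi - lo) for r in raw]

# Raw outputs from heterogeneous detectors mapped onto one scale.
msgs = [
    AgentMessage(score=s, confidence=0.9, evidence="feature-level deviation")
    for s in normalize_scores([2.0, 4.0, 3.0])
]
```

Keeping scores on a common bounded scale is what lets the coordinator turn disagreement between model families into bounded per-agent losses rather than an artifact of incompatible score ranges.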