[2602.18916] Adaptive Collaboration of Arena-Based Argumentative LLMs for Explainable and Contestable Legal Reasoning
Summary
The paper presents Adaptive Collaboration of Arena-Based Argumentative LLMs (ACAL), a framework for explainable and contestable legal reasoning that justifies decisions through structured argumentation and supports direct user intervention.
Why It Matters
This research addresses the critical need for transparency and contestability in legal AI systems, which is essential for building trust and ensuring accountability in automated decision-making processes. By integrating human oversight, it aims to improve the reliability of legal reasoning applications.
Key Takeaways
- ACAL framework enhances legal reasoning by integrating multi-agent collaboration.
- Supports a Human-in-the-Loop (HITL) approach for user intervention.
- Empirical evaluations show ACAL outperforms existing approaches in producing structured, transparent reasoning.
- Utilizes a clash resolution mechanism for adjudicating conflicting claims.
- Focuses on balancing predictive performance with explainability.
Computer Science > Multiagent Systems
arXiv:2602.18916 (cs)
[Submitted on 21 Feb 2026]

Title: Adaptive Collaboration of Arena-Based Argumentative LLMs for Explainable and Contestable Legal Reasoning

Authors: Hoang-Loc Cao, Phuc Ho, Truong Thanh Hung Nguyen, Phuc Truong Loc Nguyen, Dinh Thien Loc Nguyen, Hung Cao

Abstract: Legal reasoning requires not only high accuracy but also the ability to justify decisions through verifiable and contestable arguments. However, existing Large Language Model (LLM) approaches, such as Chain-of-Thought (CoT) and Retrieval-Augmented Generation (RAG), often produce unstructured explanations that lack a formal mechanism for verification or user intervention. To address this limitation, we propose Adaptive Collaboration of Argumentative LLMs (ACAL), a neuro-symbolic framework that integrates adaptive multi-agent collaboration with an Arena-based Quantitative Bipolar Argumentation Framework (A-QBAF). ACAL dynamically deploys expert agent teams to construct arguments, employs a clash resolution mechanism to adjudicate conflicting claims, and utilizes uncertainty-aware escalation for borderline cases. Crucially, our framework supports a Human-in-the-Loop (HITL) contestability workflow, enabling users to directly audit and modify the underlying reason...
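The abstract does not spell out how the A-QBAF scores competing arguments, but Quantitative Bipolar Argumentation Frameworks are commonly evaluated with gradual semantics such as DF-QuAD, where each argument has a base score that is raised by supporters and lowered by attackers. The sketch below illustrates that general idea only; the argument names, base scores, and the choice of DF-QuAD are assumptions, not the paper's actual method.

```python
def aggregate(scores):
    """Probabilistic-sum aggregation over attacker or supporter strengths.

    Returns 0.0 when there are no inputs, so an unattacked,
    unsupported argument keeps its base score.
    """
    acc = 1.0
    for s in scores:
        acc *= (1.0 - s)
    return 1.0 - acc if scores else 0.0

def dfquad_strength(base, attackers, supporters):
    """DF-QuAD-style final strength for one argument.

    `base` is the initial score in [0, 1]; `attackers` and
    `supporters` are the final strengths of incoming arguments.
    """
    va = aggregate(attackers)
    vs = aggregate(supporters)
    if va >= vs:
        # Net attack: pull the score down toward 0.
        return base - base * (va - vs)
    # Net support: push the score up toward 1.
    return base + (1.0 - base) * (vs - va)

# Hypothetical acyclic graph: a legal claim supported by a precedent
# argument and attacked by a counterargument. Leaves are evaluated
# first, then their strengths propagate to the claim.
precedent = dfquad_strength(0.8, attackers=[], supporters=[])
counter = dfquad_strength(0.6, attackers=[], supporters=[])
claim = dfquad_strength(0.5, attackers=[counter], supporters=[precedent])
```

In a contestability workflow like the one the abstract describes, a user intervention could amount to editing a base score or removing an edge and re-running this propagation, which makes the effect of the change on the final decision directly inspectable.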