[2512.07801] Collaborative Causal Sensemaking: Closing the Complementarity Gap in Human-AI Decision Support
Computer Science > Computation and Language
arXiv:2512.07801 (cs)
[Submitted on 8 Dec 2025 (v1), last revised 25 Mar 2026 (this version, v5)]

Title: Collaborative Causal Sensemaking: Closing the Complementarity Gap in Human-AI Decision Support
Authors: Raunak Jain

Abstract: LLM-based agents are increasingly deployed for expert decision support, yet human-AI teams in high-stakes settings do not yet reliably outperform the best individual. We argue this complementarity gap reflects a fundamental mismatch: current agents are trained as answer engines, not as partners in the collaborative sensemaking through which experts actually make decisions. Sensemaking (the ability to co-construct causal explanations, surface uncertainties, and adapt goals) is the key capability that current training pipelines do not explicitly develop or evaluate. We propose Collaborative Causal Sensemaking (CCS) as a research agenda to develop this capability from the ground up, spanning new training environments that reward collaborative thinking, representations for shared human-AI mental models, and evaluation centred on trust and complementarity. Taken together, these directions shift MAS research from building oracle-like answer engines to cultivating AI teammates that co-reason with their human partners over the causal structure of...