[2602.20166] ConceptRM: The Quest to Mitigate Alert Fatigue through Consensus-Based Purity-Driven Data Cleaning for Reflection Modelling
Summary
The paper presents ConceptRM, a consensus-based, purity-driven data-cleaning method that constructs high-quality training corpora for reflection models, with the goal of reducing alert fatigue in applications driven by intelligent agents.
Why It Matters
Alert fatigue is a significant issue in AI applications: overwhelmed by a flood of false notifications, users may begin to ignore critical alerts. ConceptRM addresses this challenge by improving the quality of the data used to train reflection models, which filter alerts before they reach users, thereby improving user response to genuine alerts and the overall effectiveness of AI systems.
Key Takeaways
- ConceptRM effectively reduces alert fatigue by improving data cleaning methods.
- The method uses consensus-based learning to identify reliable negative samples from noisy data.
- Experimental results show significant performance improvements over existing models.
- Only minimal expert annotations are needed to achieve high-quality results.
- This approach can be applied across various domains where alert fatigue is a concern.
Computer Science > Computation and Language
arXiv:2602.20166 (cs) [Submitted on 9 Feb 2026]
Authors: Yongda Yu, Lei Zhang, Xinxin Guo, Minghui Yu, Zhengqi Zhuang, Guoping Rong, Haifeng Shen, Zhengfeng Li, Boge Wang, Guoan Zhang, Bangyu Xiang, Xiaobin Xu
Abstract: In many applications involving intelligent agents, the overwhelming volume of alerts (mostly false) generated by the agents may desensitize users and cause them to overlook critical issues, leading to the so-called "alert fatigue". A common strategy is to train a reflection model as a filter that intercepts false alerts, using labelled data collected from user verification feedback. However, a key challenge is the noisy nature of such data, as it is often collected in production environments. Since cleaning noise via manual annotation incurs high costs, this paper proposes ConceptRM, a novel method for constructing a high-quality corpus to train a reflection model capable of effectively intercepting false alerts. With only a small amount of expert annotations as anchors, ConceptRM creates perturbed datasets with varying noise ratios and utilizes co-teaching to train multiple distin...
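The abstract's core recipe, training several models on perturbed copies of the noisy data and keeping only the samples every model agrees with, can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the 1-D threshold "models", the noise ratios, and all function names here are invented for the example.

```python
import random

def make_noisy_data(n=200, noise=0.2, seed=0):
    """Toy 1-D data: the true label is 1 iff x > 0; a `noise` fraction is flipped."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = 1 if x > 0 else 0
        if rng.random() < noise:
            y = 1 - y  # simulate noisy user-verification feedback
        data.append((x, y))
    return data

def train_threshold(data):
    """Tiny stand-in 'model': pick the decision threshold minimising training error."""
    best_t, best_err = 0.0, float("inf")
    for i in range(101):
        t = i / 50.0 - 1.0
        err = sum((1 if x > t else 0) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def consensus_clean(data, noise_ratios=(0.0, 0.05, 0.1, 0.15, 0.2), seed=1):
    """Train one model per perturbed copy; keep samples all models agree with."""
    rng = random.Random(seed)
    thresholds = []
    for rate in noise_ratios:
        # Perturbed copy: flip an extra `rate` fraction of the (already noisy) labels.
        perturbed = [(x, 1 - y if rng.random() < rate else y) for x, y in data]
        thresholds.append(train_threshold(perturbed))
    # Consensus filter: a sample is "reliable" only if every model predicts its label.
    return [(x, y) for x, y in data
            if all((1 if x > t else 0) == y for t in thresholds)]

data = make_noisy_data()
clean = consensus_clean(data)
```

In the paper, the models are reflection models trained with co-teaching rather than 1-D thresholds, and consensus across them, anchored by a small set of expert annotations, identifies reliable negative samples; the sketch above mirrors only that agreement-based filtering step.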