[2508.04899] Honest and Reliable Evaluation and Expert Equivalence Testing of Automated Neonatal Seizure Detection
Computer Science > Machine Learning

arXiv:2508.04899 (cs)

[Submitted on 6 Aug 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Honest and Reliable Evaluation and Expert Equivalence Testing of Automated Neonatal Seizure Detection

Authors: Jovana Kljajic, John M. O'Toole, Robert Hogan, Tamara Skoric

Abstract: Reliable evaluation of machine learning models for neonatal seizure detection is critical for clinical adoption. Current practices often rely on inconsistent and biased metrics, hindering model comparability and interpretability. Expert-level claims about AI performance are frequently made without rigorous validation, raising concerns about their reliability. This study aims to systematically evaluate common performance metrics and propose best practices tailored to the specific challenges of neonatal seizure detection. Using real and synthetic seizure annotations, we assessed standard performance metrics, consensus strategies, and human-expert level equivalence tests under varying class imbalance, inter-rater agreement, and number of raters. Matthews and Pearson's correlation coefficients outperformed the area under the curve in reflecting performance under class imbalance. Consensus types are sensitive to the number of raters and agreement level among them. Amo...
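The abstract's point about class imbalance can be illustrated with a minimal sketch (this is not the paper's code; the synthetic data, prevalence, threshold, and use of scikit-learn metrics are all assumptions for illustration). On a heavily imbalanced label set, a ranking metric like AUC and a threshold-based metric like the Matthews correlation coefficient (MCC) can tell quite different stories about the same detector:

```python
# Illustrative sketch only: compare AUC and MCC on a synthetic,
# heavily imbalanced "seizure detection" task (~2% positive class).
# Assumes scikit-learn is available; not the paper's actual pipeline.
import numpy as np
from sklearn.metrics import matthews_corrcoef, roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
y_true = (rng.random(n) < 0.02).astype(int)  # ~2% seizure prevalence

# A weak detector: scores are only slightly shifted for positives.
scores = rng.normal(0.0, 1.0, n) + 0.5 * y_true
y_pred = (scores > 2.0).astype(int)  # a conservative decision threshold

auc = roc_auc_score(y_true, scores)
mcc = matthews_corrcoef(y_true, y_pred)
print(f"AUC: {auc:.3f}")  # can look moderately good despite imbalance
print(f"MCC: {mcc:.3f}")  # stays near zero when the detector adds little
```

Because MCC incorporates all four confusion-matrix cells, it tends to stay close to zero for a near-useless detector even when class imbalance inflates other summaries, which is consistent with the abstract's finding.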