[2604.03254] Is your AI Model Accurate Enough? The Difficult Choices Behind Rigorous AI Development and the EU AI Act
Computer Science > Computers and Society

arXiv:2604.03254 (cs)

[Submitted on 11 Mar 2026]

Title: Is your AI Model Accurate Enough? The Difficult Choices Behind Rigorous AI Development and the EU AI Act

Authors: Lucas G. Uberti-Bona Marin, Bram Rijsbosch, Kristof Meding, Gerasimos Spanakis, Gijs van Dijck, Konrad Kollnig

Abstract: Technical and legal debates frequently suggest that "accuracy" is an objective, measurable, and purely technical property. We challenge this view, showing that evaluating AI performance fundamentally depends on context-dependent normative decisions. These techno-normative choices are crucial for rigorous AI deployment, as they determine which errors are prioritised, how risks are distributed, and how trade-offs between competing objectives are resolved. This paper provides a legal-technical analysis of the choices that shape how accuracy is defined, measured, and assessed, using the 2024 European Union AI Act -- which mandates an "appropriate level of accuracy" for high-risk systems -- as a primary case study. We identify and analyse four choices central to any robust performance evaluation: (1) selecting metrics, (2) balancing multiple metrics, (3) measuring metrics against representative data, and (4) determining acceptance thresholds. ...
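To make the abstract's point concrete, the sketch below (not from the paper; all scores and labels are hypothetical) illustrates choice (4): moving the acceptance threshold of a classifier redistributes errors between false positives and false negatives, so "accuracy" at a given threshold embeds a normative decision about which errors matter more.

```python
# Hypothetical illustration: how an acceptance threshold redistributes
# errors between false positives and false negatives.

def confusion_counts(scores, labels, threshold):
    """Count (tp, fp, fn, tn) when predicting positive for score >= threshold."""
    tp = fp = fn = tn = 0
    for s, y in zip(scores, labels):
        pred = s >= threshold
        if pred and y:
            tp += 1
        elif pred and not y:
            fp += 1
        elif not pred and y:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

# Hypothetical model scores and ground-truth labels.
scores = [0.95, 0.85, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]

for threshold in (0.5, 0.8):
    tp, fp, fn, tn = confusion_counts(scores, labels, threshold)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"threshold={threshold}: precision={precision:.2f}, recall={recall:.2f}")
# threshold=0.5: precision=0.60, recall=0.75
# threshold=0.8: precision=1.00, recall=0.50
```

On this toy data, the stricter threshold eliminates false positives at the cost of missing half of the true positives; which operating point is "appropriate" is exactly the kind of context-dependent, normative question the paper examines.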