[2509.04959] On the Normalization of Confusion Matrices: Methods and Geometric Interpretations



Computer Science > Machine Learning

arXiv:2509.04959 (cs) [Submitted on 5 Sep 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: On the Normalization of Confusion Matrices: Methods and Geometric Interpretations

Authors: Johan Erbani, Pierre-Edouard Portier, Elod Egyed-Zsigmond, Sonia Ben Mokhtar, Diana Nurbakova

Abstract: The confusion matrix is a standard tool for evaluating classifiers by providing insights into class-level errors. In heterogeneous settings, its values are shaped by two main factors: class similarity -- how easily the model confuses two classes -- and distribution bias, arising from skewed distributions in the training and test sets. However, confusion matrix values reflect a mix of both factors, making it difficult to disentangle their individual contributions. To address this, we introduce bistochastic normalization using Iterative Proportional Fitting, a generalization of row and column normalization. Unlike standard normalizations, this method recovers the underlying structure of class similarity. By disentangling error sources, it enables more accurate diagnosis of model behavior and supports more targeted improvements. We also show a correspondence between confusion matrix normalizations and the model's internal class representations. Both standard and bistochastic ...
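The abstract does not spell out the paper's exact procedure, but Iterative Proportional Fitting (also known as the Sinkhorn-Knopp algorithm) for making a positive matrix bistochastic is a standard technique: alternately rescale rows and columns to sum to 1 until both constraints hold. A minimal sketch, assuming a strictly positive confusion matrix; the function name and the example matrix are illustrative, not from the paper:

```python
import numpy as np

def ipf_bistochastic(C, n_iters=1000, tol=1e-9):
    """Bistochastic normalization via Iterative Proportional Fitting
    (Sinkhorn-Knopp): alternately normalize rows and columns of a
    strictly positive matrix until both sum to 1."""
    M = C.astype(float)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
        # after the column step, check whether rows still sum to 1
        if np.abs(M.sum(axis=1) - 1).max() < tol:
            break
    return M

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
C = np.array([[50., 10.,  2.],
              [ 8., 30.,  5.],
              [ 1.,  4., 90.]])
B = ipf_bistochastic(C)
# Every row and every column of B now sums to (approximately) 1.
```

Row normalization alone yields per-class recall rates; column normalization alone yields precision-like rates; IPF generalizes both, removing row and column marginals simultaneously so that the remaining structure reflects pairwise class similarity rather than class-frequency skew. Note that convergence is only guaranteed for matrices with appropriate support (e.g., strictly positive entries); zero entries may require smoothing.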

Originally published on March 31, 2026. Curated by AI News.
