[2603.04204] Beyond Mixtures and Products for Ensemble Aggregation: A Likelihood Perspective on Generalized Means

arXiv - Machine Learning

Statistics > Machine Learning. arXiv:2603.04204 (stat). Submitted on 4 Mar 2026.

Title: Beyond Mixtures and Products for Ensemble Aggregation: A Likelihood Perspective on Generalized Means

Authors: Raphaël Razafindralambo, Rémy Sun, Frédéric Precioso, Damien Garreau, Pierre-Alexandre Mattei

Abstract: Density aggregation is a central problem in machine learning, for instance when combining predictions from a Deep Ensemble. The choice of aggregation rule remains an open question, with two commonly proposed approaches being linear pooling (probability averaging) and geometric pooling (logit averaging). In this work, we address this question by studying the normalized generalized mean of order $r \in \mathbb{R} \cup \{-\infty,+\infty\}$ through the lens of log-likelihood, the standard evaluation criterion in machine learning. This provides a unifying aggregation formalism and shows different optimal configurations for different situations. We show that the regime $r \in [0,1]$ is the only range ensuring systematic improvements relative to individual distributions, thereby providing a principled justification for the reliability and widespread practical use of linear ($r=1$) and geometric ($r=0$) pooling. In contrast, we show that aggregation rules with $r \notin [0,1]$ may fail t...
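The pooling rules discussed in the abstract can be sketched concretely. A minimal illustration, assuming categorical ensemble members, is the normalized generalized mean of order $r$: raise each member's probabilities to the power $r$, average across members, take the $1/r$ root, and renormalize; the $r \to 0$ limit is the (normalized) geometric mean. The function name and example values below are illustrative, not from the paper:

```python
import numpy as np

def generalized_mean_pool(probs, r):
    """Normalized generalized mean of order r across ensemble members.

    probs: array of shape (M, K) with M categorical distributions over K classes.
    r=1 recovers linear pooling (probability averaging); r=0 is treated as the
    limiting case, geometric pooling (equivalent to logit averaging).
    """
    probs = np.asarray(probs, dtype=float)
    if r == 0:
        # Geometric mean: exponential of the average log-probability.
        agg = np.exp(np.mean(np.log(probs), axis=0))
    else:
        agg = np.mean(probs ** r, axis=0) ** (1.0 / r)
    return agg / agg.sum()  # renormalize to a valid distribution

# Two hypothetical ensemble members predicting over 3 classes.
p = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.4, 0.2]])

linear = generalized_mean_pool(p, r=1.0)   # probability averaging
geometric = generalized_mean_pool(p, r=0.0)  # logit averaging
```

With $r=1$ the members' probabilities are simply averaged (here giving $[0.55, 0.3, 0.15]$), while $r=0$ multiplies them elementwise and renormalizes, which tends to sharpen the pooled distribution toward classes on which the members agree.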

Originally published on March 05, 2026. Curated by AI News.
