[2511.18945] MIST: Mutual Information Estimation Via Supervised Training

arXiv - Machine Learning · 4 min read

Summary

The paper presents MIST, a mutual information estimator parameterized by a neural network and trained end-to-end on a large meta-dataset of synthetic joint distributions with known ground-truth MI, outperforming classical estimators in both efficiency and accuracy.

Why It Matters

This research addresses the limitations of classical mutual information estimators by introducing a fully data-driven method that enhances flexibility and efficiency. The ability to provide well-calibrated uncertainty estimates is crucial for applications in machine learning and information theory, making it a significant contribution to the field.

Key Takeaways

  • MIST parameterizes the MI estimator itself with a neural network, trained on synthetic joint distributions with known ground-truth MI.
  • The learned estimator outperforms classical baselines across sample sizes and dimensions, including on distributions unseen during training.
  • A quantile regression loss lets the estimator approximate the sampling distribution of MI, yielding uncertainty estimates rather than a single point value.
  • The framework can be integrated into larger learning systems, enhancing its practical applicability.
  • Normalizing flows adapt the estimator to different data modalities, expanding its use cases.
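The quantile regression idea in the takeaways can be sketched with the standard pinball loss. The function below is a minimal illustration, not the paper's exact training objective; the quantile levels and toy values are assumptions for demonstration.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: under-prediction is penalized by tau,
    over-prediction by (1 - tau)."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1.0) * diff)))

# Training one output head per quantile level lets the network report an
# approximate sampling distribution of MI instead of a single point estimate.
taus = [0.1, 0.5, 0.9]                      # illustrative quantile levels
y_true = np.array([1.0, 1.0, 1.0])          # toy ground-truth MI values (nats)
preds = {0.1: np.array([0.8, 0.9, 0.7]),    # hypothetical per-quantile outputs
         0.5: np.array([1.0, 1.1, 0.9]),
         0.9: np.array([1.3, 1.2, 1.4])}
for tau in taus:
    print(f"tau={tau}: loss={pinball_loss(y_true, preds[tau], tau):.4f}")
```

Minimizing this loss at several quantile levels is what allows the estimator to report calibrated intervals around its MI prediction.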

Computer Science > Machine Learning

arXiv:2511.18945 (cs) [Submitted on 24 Nov 2025 (v1), last revised 20 Feb 2026 (this version, v2)]

Title: MIST: Mutual Information Estimation Via Supervised Training

Authors: German Gritsai, Megan Richards, Maxime Méloux, Kyunghyun Cho, Maxime Peyrard

Abstract: We propose a fully data-driven approach to designing mutual information (MI) estimators. Since any MI estimator is a function of the observed sample from two random variables, we parameterize this function with a neural network (MIST) and train it end-to-end to predict MI values. Training is performed on a large meta-dataset of 625,000 synthetic joint distributions with known ground-truth MI. To handle variable sample sizes and dimensions, we employ a two-dimensional attention scheme ensuring permutation invariance across input samples. To quantify uncertainty, we optimize a quantile regression loss, enabling the estimator to approximate the sampling distribution of MI rather than return a single point estimate. This research program departs from prior work by taking a fully empirical route, trading universal theoretical guarantees for flexibility and efficiency. Empirically, the learned estimators largely outperform classical baselines across sample sizes and dimensions, including on joint distributions unseen during training. …
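The supervised setup described in the abstract can be sketched with synthetic distributions whose MI is known in closed form. Below, correlated bivariate Gaussians serve as the distribution family, for which MI = -0.5·ln(1 - ρ²) in nats; the family, correlations, and sample size are illustrative assumptions, and the paper's 625,000-distribution meta-dataset is far broader.

```python
import numpy as np

def gaussian_mi(rho):
    """Ground-truth MI (nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * np.log(1.0 - rho ** 2)

def make_training_pair(rho, n_samples, rng):
    """One (sample, label) pair: a joint sample plus its known MI."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_samples)
    return xy, gaussian_mi(rho)

rng = np.random.default_rng(0)
# A tiny meta-dataset: each entry pairs an (n_samples, 2) array with its label.
# A network like MIST would be trained to map the sample to the MI value.
meta_dataset = [make_training_pair(rho, n_samples=256, rng=rng)
                for rho in (0.0, 0.5, 0.9)]
for xy, mi in meta_dataset:
    print(xy.shape, round(mi, 4))
```

Because each sample carries an exact label, estimating MI becomes an ordinary supervised regression problem over sets of points, which is what lets the paper trade theoretical guarantees for learned flexibility.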

