[2404.15390] Remedying uncertainty representations in visual inference through Explaining-Away Variational Autoencoders
Computer Science > Machine Learning
arXiv:2404.15390 (cs)
[Submitted on 23 Apr 2024 (v1), last revised 30 Mar 2026 (this version, v3)]

Title: Remedying uncertainty representations in visual inference through Explaining-Away Variational Autoencoders
Authors: Josefina Catoni, Domonkos Martos, Ferenc Csikor, Enzo Ferrante, Diego H. Milone, Balázs Meszéna, Gergő Orbán, Rodrigo Echeveste

Abstract: Optimal computations under uncertainty require an adequate probabilistic representation of beliefs. Deep generative models, and specifically Variational Autoencoders (VAEs), have the potential to meet this demand by building latent representations that learn to associate uncertainties with inferences while avoiding the intractable computations characteristic of exact probabilistic inference. Yet we show that it is precisely uncertainty representation that suffers from inconsistencies under an array of relevant computer vision conditions: contrast-dependent computations, image corruption, and out-of-distribution detection. Drawing inspiration from classical computer vision, we present a principled extension to the standard VAE by introducing a simple yet powerful inductive bias through a global scaling latent variable, which we call the Explaining-Away VAE (EA-VAE). By applying EA-VAEs to a spectrum of computer visi...
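The abstract's central idea, a global scaling latent variable added to the VAE's generative model, can be illustrated with a minimal toy sketch. This is not the authors' implementation: the linear "decoder", the dimensions, and the function names below are all assumptions chosen for illustration of the generative structure x = s * f(z) + noise.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMG_DIM = 8, 64
W = rng.normal(size=(IMG_DIM, LATENT_DIM))  # toy linear "decoder" weights

def decode(z):
    """Toy decoder f(z): maps content latents to an image vector."""
    return np.tanh(W @ z)

def generate(z, log_s, noise_std=0.01):
    """EA-VAE-style generation: a single global scale s multiplies f(z)."""
    s = np.exp(log_s)  # positive global scale (e.g. image contrast)
    return s * decode(z) + noise_std * rng.normal(size=IMG_DIM)

z = rng.normal(size=LATENT_DIM)
low_contrast = generate(z, log_s=-1.0)   # dim version of the same content
high_contrast = generate(z, log_s=+1.0)  # bright version of the same content
# The same content latent z yields images differing only in global scale;
# inferring s "explains away" contrast, leaving z contrast-invariant.
```

In this toy setting, inference over the scalar s absorbs global intensity changes, so uncertainty about the content latents z need not grow or shrink spuriously with image contrast, which is the inconsistency the abstract points to in standard VAEs.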