[2412.06014] Post-hoc Probabilistic Vision-Language Models


arXiv - Machine Learning · 3 min read

Summary

This article presents a novel approach to uncertainty estimation in vision-language models (VLMs): a post-hoc method that yields well-calibrated predictive uncertainties without requiring any additional training.

Why It Matters

As vision-language models are increasingly deployed in critical applications, understanding and quantifying uncertainty is vital for ensuring reliability and safety. This research offers a significant advancement in the field by providing a method that enhances model interpretability and performance in active learning scenarios.

Key Takeaways

  • Introduces a post-hoc uncertainty estimation method for VLMs.
  • Yields improved, well-calibrated predictive uncertainties and interpretable uncertainty estimates.
  • Supports sample-efficient active learning strategies.
  • No additional training is required for implementation.
  • Promising implications for safety-critical applications.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2412.06014 (cs) [Submitted on 8 Dec 2024 (v1), last revised 13 Feb 2026 (this version, v5)]

Title: Post-hoc Probabilistic Vision-Language Models

Authors: Anton Baumann, Rui Li, Marcus Klasson, Santeri Mentu, Shyamgopal Karthik, Zeynep Akata, Arno Solin, Martin Trapp

Abstract: Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descriptions to a joint latent space in which their similarity is assessed using the cosine similarity. However, a deterministic mapping of inputs fails to capture uncertainties over concepts arising from domain shifts when used in downstream tasks. In this work, we propose post-hoc uncertainty estimation in VLMs that does not require additional training. Our method leverages a Bayesian posterior approximation over the last layers in VLMs and analytically quantifies uncertainties over cosine similarities. We demonstrate its effectiveness for uncertainty quantification and support set selection in active learning. Compared to baselines, we obtain improved and well-calibrated predictive uncertainties, interpretable uncertainty estimates, and sample-efficient active learning. Our results show promise for safety-critical applications o...
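The abstract describes placing a Bayesian posterior over a VLM's last layers and propagating that uncertainty to the cosine similarity between image and text embeddings. The paper quantifies this analytically; the following is only a minimal, hypothetical Monte Carlo sketch of the same idea, assuming a made-up diagonal Gaussian posterior over one last linear layer (all shapes, variances, and inputs are illustrative, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d-dim joint embedding space, h-dim penultimate features.
d, h = 8, 16

# Stand-in MAP estimate of the image encoder's last linear layer.
W_map = rng.normal(size=(d, h)) / np.sqrt(h)

# Assumed diagonal Gaussian posterior over the flattened weights, playing the
# role of a post-hoc (e.g. Laplace-style) approximation; variances are made up.
post_var = np.full(W_map.size, 1e-3)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A fixed text embedding and one image's penultimate features.
t = rng.normal(size=d)
phi = rng.normal(size=h)

# Monte Carlo: sample last-layer weights from the posterior, map the features
# into the joint space, and collect the induced cosine-similarity distribution.
samples = []
for _ in range(500):
    noise = rng.normal(size=W_map.size) * np.sqrt(post_var)
    W_s = W_map + noise.reshape(d, h)
    samples.append(cosine(W_s @ phi, t))

mean_sim = float(np.mean(samples))
std_sim = float(np.std(samples))  # spread = uncertainty over the similarity
print(f"cosine similarity: {mean_sim:.3f} +/- {std_sim:.3f}")
```

The standard deviation of the sampled similarities is the kind of per-input uncertainty signal the paper uses for calibration and for picking informative samples in active learning; the analytic treatment in the paper avoids the sampling loop entirely.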
