[2505.12506] Unsupervised Representation Learning -- an Invariant Risk Minimization Perspective

arXiv - AI 3 min read

About this article

Computer Science > Machine Learning — arXiv:2505.12506 (cs)
Submitted on 18 May 2025 (v1); last revised 2 Mar 2026 (this version, v3)

Title: Unsupervised Representation Learning -- an Invariant Risk Minimization Perspective
Authors: Yotam Norman, Ron Meir

Abstract: We propose a novel unsupervised framework for Invariant Risk Minimization (IRM), extending the concept of invariance to settings where labels are unavailable. Traditional IRM methods rely on labeled data to learn representations that are robust to distributional shifts across environments. In contrast, our approach redefines invariance through feature distribution alignment, enabling robust representation learning from unlabeled data. We introduce two methods within this framework: Principal Invariant Component Analysis (PICA), a linear method that extracts invariant directions under Gaussian assumptions, and the Variational Invariant Autoencoder (VIAE), a deep generative model that separates environment-invariant and environment-dependent latent factors. Our approach is based on a novel "unsupervised" structural causal model and supports environment-conditioned sample generation and intervention. Empirical evaluations on synthetic datasets, modified versions of MNIST, and CelebA demonstrate the effectiveness of our methods in capturing invariant ...
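To make the "invariance through feature distribution alignment" idea more concrete, here is a minimal, hypothetical sketch of what extracting invariant directions under Gaussian assumptions could look like for two environments. The function name invariant_directions, the two-environment setup, and the eigenvalue heuristic (keeping directions where per-environment covariances nearly agree) are our own illustrative assumptions, not the paper's actual PICA algorithm.

```python
# Hypothetical sketch: find linear directions whose Gaussian statistics
# (here, projected variance) agree across two environments.
# Illustration of the idea only, not the paper's exact PICA method.
import numpy as np

def invariant_directions(X_e1, X_e2, k=2):
    """Return k directions along which the two environments look most alike.

    X_e1, X_e2: (n_samples, n_features) arrays from two environments.
    Heuristic: eigen-decompose the difference of environment covariances and
    keep the eigenvectors whose eigenvalues are closest to zero, i.e.
    directions with (nearly) the same projected variance in both environments.
    """
    S1 = np.cov(X_e1, rowvar=False)
    S2 = np.cov(X_e2, rowvar=False)
    D = S1 - S2                                   # symmetric difference of second moments
    eigvals, eigvecs = np.linalg.eigh(D)
    order = np.argsort(np.abs(eigvals))           # smallest |eigenvalue| first
    return eigvecs[:, order[:k]]                  # (n_features, k)

# Toy usage: feature 0 is invariant, feature 1 changes variance across environments.
rng = np.random.default_rng(0)
X_e1 = rng.normal(0.0, [1.0, 1.0], size=(5000, 2))
X_e2 = rng.normal(0.0, [1.0, 3.0], size=(5000, 2))
W = invariant_directions(X_e1, X_e2, k=1)
print(W.round(2))  # approximately [[±1], [0]]: picks the invariant coordinate
```

Under the stated Gaussian assumption, a direction whose mean and variance match across environments has the same marginal distribution in every environment, which is the unsupervised notion of invariance the abstract describes; the VIAE method extends the same idea nonlinearly by splitting the latent space into environment-invariant and environment-dependent factors.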

Originally published on March 04, 2026. Curated by AI News.

Related Articles

[2603.14267] DiFlowDubber: Discrete Flow Matching for Automated Video Dubbing via Cross-Modal Alignment and Synchronization
Machine Learning · arXiv - AI · 4 min

[2601.22440] AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
LLMs · arXiv - AI · 4 min

[2601.13622] CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models
LLMs · arXiv - AI · 3 min

[2512.08777] Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages
LLMs · arXiv - AI · 3 min