[2602.01427] Robust Generalization with Adaptive Optimal Transport Priors for Decision-Focused Learning

arXiv - Machine Learning

Summary

This paper presents a Prototype-Guided Distributionally Robust Optimization (PG-DRO) framework that enhances few-shot learning by integrating class-adaptive priors for robust decision-making under distribution shifts.

Why It Matters

The research addresses generalization in machine learning, particularly in few-shot scenarios where labeled data is scarce. By adapting the reference distribution rather than fixing it, the proposed framework maintains robust performance under distribution shift, which matters for real-world applications where data may be unreliable or limited.

Key Takeaways

  • PG-DRO framework improves few-shot learning by using adaptive optimal transport priors.
  • The method integrates class-specific knowledge to enhance decision-making robustness.
  • Experiments demonstrate PG-DRO's superior performance over traditional learners and DRO baselines.

Statistics > Machine Learning
arXiv:2602.01427 (stat)
[Submitted on 1 Feb 2026 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: Robust Generalization with Adaptive Optimal Transport Priors for Decision-Focused Learning
Authors: Haixiang Sun, Andrew L. Liu

Abstract: Few-shot learning requires models to generalize under limited supervision while remaining robust to distribution shifts. Existing Sinkhorn Distributionally Robust Optimization (DRO) methods provide theoretical guarantees but rely on a fixed reference distribution, which limits their adaptability. We propose a Prototype-Guided Distributionally Robust Optimization (PG-DRO) framework that learns class-adaptive priors from abundant base data via hierarchical optimal transport and embeds them into the Sinkhorn DRO formulation. This design enables few-shot information to be organically integrated into producing class-specific robust decisions that are both theoretically grounded and efficient, and further aligns the uncertainty set with transferable structural knowledge. Experiments show that PG-DRO achieves stronger robust generalization in few-shot scenarios, outperforming both standard learners and DRO baselines.

Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Applications (stat.AP)
Cite as: arXiv:2602.01427 ...
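The paper's own PG-DRO formulation is not reproduced in this summary, but the Sinkhorn machinery it builds on is standard entropy-regularized optimal transport. As background, here is a minimal sketch of the Sinkhorn iterations; the function, the toy marginals, and the cost matrix are illustrative choices, not taken from the paper:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b : source and target marginals (each sums to 1)
    C    : cost matrix of shape (len(a), len(b))
    eps  : entropic regularization strength
    Returns a transport plan P whose row sums match a and whose
    column sums approach b as the iterations converge.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # rescale to fit column marginals
        u = a / (K @ v)               # rescale to fit row marginals
    return u[:, None] * K * v[None, :]

# Toy example: transport between two 3-point distributions on a line.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
print(P.sum(axis=1))  # ~ a
print(P.sum(axis=0))  # ~ b
```

In Sinkhorn DRO, an uncertainty set of distributions is defined around a reference distribution using this regularized transport cost; the abstract's point is that PG-DRO replaces the fixed reference with class-adaptive priors learned from base data.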
