[2602.19788] Bayesian Meta-Learning with Expert Feedback for Task-Shift Adaptation through Causal Embeddings
arXiv - Machine Learning · 3 min read

Summary

This paper presents a Bayesian meta-learning approach that uses expert feedback and causal task embeddings to improve adaptation under task shift, addressing the challenge of negative transfer in machine learning.

Why It Matters

The research is significant as it tackles a common issue in machine learning where models struggle to adapt to new tasks that differ from their training data. By leveraging causal embeddings and expert feedback, this method aims to improve the robustness and applicability of machine learning models in real-world scenarios, particularly in clinical settings.

Key Takeaways

  • Introduces a causally-aware Bayesian meta-learning method.
  • Addresses negative transfer by conditioning on causal task embeddings.
  • Demonstrates improved adaptation in both simulations and real-world clinical predictions.

Computer Science > Machine Learning
arXiv:2602.19788 (cs)
[Submitted on 23 Feb 2026]

Title: Bayesian Meta-Learning with Expert Feedback for Task-Shift Adaptation through Causal Embeddings
Authors: Lotta Mäkinen, Jorge Loría, Samuel Kaski

Abstract: Meta-learning methods perform well on new within-distribution tasks but often fail when adapting to out-of-distribution target tasks, where transfer from source tasks can induce negative transfer. We propose a causally-aware Bayesian meta-learning method that conditions task-specific priors on precomputed latent causal task embeddings, enabling transfer based on mechanistic similarity rather than spurious correlations. Our approach explicitly considers realistic deployment settings where access to target-task data is limited and adaptation relies on noisy (expert-provided) pairwise judgments of causal similarity between source and target tasks. We provide a theoretical analysis showing that conditioning on causal embeddings controls prior mismatch and mitigates negative transfer under task shift. Empirically, we demonstrate reductions in negative transfer and improved out-of-distribution adaptation in both controlled simulations and a large-scale real-world clinical prediction setting for cross-disease transfer, where causal embeddings al...
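To make the core idea concrete, here is a minimal sketch of what "conditioning a task-specific prior on causal task embeddings" could look like. This is an illustrative toy implementation, not the paper's actual method or code: the function names, the Gaussian prior family, the softmax-over-distances weighting, and the moment-matched mixture are all assumptions chosen for clarity.

```python
import numpy as np

def similarity_weights(target_emb, source_embs, temperature=1.0):
    """Softmax over negative squared distances between causal task embeddings.

    Sources whose causal embedding is close to the target's receive most of
    the weight, so transfer follows mechanistic similarity. (Hypothetical
    weighting scheme, not taken from the paper.)
    """
    d2 = np.sum((source_embs - target_emb) ** 2, axis=1)
    logits = -d2 / temperature
    logits -= logits.max()  # numerical stability before exponentiating
    w = np.exp(logits)
    return w / w.sum()

def conditioned_prior(target_emb, source_embs, source_means, source_vars):
    """Diagonal-Gaussian prior for the target task, built as a moment-matched
    mixture of source-task priors weighted by embedding similarity."""
    w = similarity_weights(target_emb, source_embs)
    mean = w @ source_means
    # mixture second moment minus squared mean, per dimension
    second = w @ (source_vars + source_means ** 2)
    var = second - mean ** 2
    return mean, var
```

In this toy setup, a target task whose causal embedding matches a single source task inherits essentially that source's prior, while a target far from all sources falls back to a broad mixture, which is one way conditioning on embeddings can limit prior mismatch and hence negative transfer.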

