[2604.01601] Training In-Context and In-Weights Mixtures Via Contrastive Context Sampling


arXiv - Machine Learning 4 min read


Computer Science > Machine Learning
arXiv:2604.01601 (cs) · Submitted on 2 Apr 2026

Title: Training In-Context and In-Weights Mixtures Via Contrastive Context Sampling
Authors: Deeptanshu Malu, Deevyanshu Malu, Aditya Nemiwal, Sunita Sarawagi

Abstract: We investigate training strategies that co-develop in-context learning (ICL) and in-weights learning (IWL), and the ability to switch between them based on context relevance. Although current LLMs exhibit both modes, standard task-specific fine-tuning often erodes ICL, motivating IC-Train, i.e., fine-tuning with in-context examples. Prior work has shown that the emergence of ICL after IC-Train depends on factors such as task diversity and training duration. In this paper we show that the similarity structure between target inputs and context examples also plays an important role. Random contexts lead to loss of ICL and IWL dominance, while contexts containing only similar examples cause ICL to degenerate into copying labels without regard to relevance. To address this, we propose a simple Contrastive-Context scheme that enforces two types of contrast: (1) a mix of similar and random examples within a context, to evolve a correct form of ICL, and (2) varying grades of similarity across contexts, to evolve ICL-IWL mixtures. We present insights on the importance of such contrast with ...
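The within-context contrast described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the `embed` function, the cosine-similarity ranking, and the `k_similar`/`k_random` counts are all assumptions chosen for the sketch.

```python
import random


def contrastive_context(target, pool, embed, k_similar=2, k_random=2, rng=None):
    """Sample a context mixing the most similar examples with random ones.

    Hypothetical sketch of the Contrastive-Context idea: each context
    contains both relevant (similar) and irrelevant (random) examples,
    so a model trained on it must learn when to rely on the context
    (ICL) versus its weights (IWL).

    pool  -- list of (input, label) training examples
    embed -- maps an input to a vector (assumed, not from the paper)
    """
    rng = rng or random.Random(0)

    def cos(a, b):
        # Plain cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    t = embed(target)
    # Rank the pool by similarity of each example's input to the target.
    ranked = sorted(pool, key=lambda ex: cos(embed(ex[0]), t), reverse=True)
    similar = ranked[:k_similar]
    rest = ranked[k_similar:]
    rand = rng.sample(rest, min(k_random, len(rest)))
    ctx = similar + rand
    rng.shuffle(ctx)  # avoid positional cues about which examples are relevant
    return ctx
```

The paper's second contrast, varying grades of similarity across contexts, could then be approximated by drawing `k_similar` from a distribution per training batch rather than fixing it.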

Originally published on April 03, 2026. Curated by AI News.

Related Articles

I used Jeff Bezos' Day 1 rule with ChatGPT to beat procrastination
I used Jeff Bezos' Day 1 rule with ChatGPT to stop procrastinating. These simple prompts helped me start faster, overthink less and get m...
AI Tools & Products · 9 min · Llms

ChatGPT and Claude? The Real-World AI Buzz Is Elsewhere
AI Tools & Products · 1 min · Llms

Anthropic investigates unauthorized access to restricted Claude Mythos AI model
Anthropic investigates unauthorized access to restricted Claude Mythos AI model - SiliconANGLE
AI Tools & Products · 5 min · Llms

Arc Sentry outperformed LLM Guard 92% vs. 70% detection in a head-to-head benchmark. Here is how it works.
I built Arc Sentry, a pre-generation prompt injection detector for open-weight LLMs. Instead of scanning text for patterns after the fact...
Reddit - Artificial Intelligence · 1 min · Llms

