[2604.04516] GAIN: Multiplicative Modulation for Domain Adaptation


Computer Science > Machine Learning

arXiv:2604.04516 (cs) [Submitted on 6 Apr 2026]

Title: GAIN: Multiplicative Modulation for Domain Adaptation

Authors: Hengshuai Yao, Xing Chen, Ahmed Murtadha, Guan Wang

Abstract: Adapting LLMs to new domains causes forgetting because standard methods (full fine-tuning, LoRA) inject new directions into the weight space. We propose GAIN, which re-emphasizes existing features through multiplicative modulation W_new = S * W. The learned diagonal matrix S is applied to the attention output projection and optionally the FFN. The principle mirrors gain modulation in neuroscience, where neurons adapt to context by scaling response strength while preserving selectivity. We evaluate GAIN on five models from four families (774M to 70B), adapting sequentially across eight domains. GAIN-FFN matches LoRA's in-domain adaptation, but their effects on previously trained domains are opposite: GAIN-FFN improves them by 7-13% (validation PPL), while LoRA degrades them by 18-36%. Downstream accuracy confirms the pattern: for example, after seven sequential adaptations on Qwen2.5, GAIN-FFN degrades BoolQ by only 0.8% while LoRA damages it by 14.9%. GAIN adds 46K-230K parameters per model and can be absorbed into the pretrained weights for zero inference cost.

Subjects: Machine Learning (cs.LG); Artificial Intell...
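The core idea in the abstract — a learned diagonal matrix S that rescales a frozen weight matrix, W_new = S * W, and can later be folded into the weights — can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: the class name `GainModulatedLinear`, the initialization, and the choice to also scale the bias are assumptions; the paper applies the modulation to the attention output projection and optionally the FFN.

```python
import torch
import torch.nn as nn

class GainModulatedLinear(nn.Module):
    """Hypothetical sketch of multiplicative gain modulation.

    Wraps a frozen nn.Linear and learns one gain per output feature,
    i.e. W_new = diag(s) @ W. Since the gain acts on the layer output,
    scaling out = x W^T + b is equivalent to using weights diag(s) W
    and bias s * b, so the gain can be absorbed after training.
    """

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # only the gains are trained
        # Initialize gains to 1 so training starts at the pretrained model.
        self.gain = nn.Parameter(torch.ones(linear.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) * self.gain

    def absorb(self) -> nn.Linear:
        """Fold the learned gain into the weights for zero inference cost."""
        merged = nn.Linear(self.linear.in_features,
                           self.linear.out_features,
                           bias=self.linear.bias is not None)
        with torch.no_grad():
            merged.weight.copy_(self.gain.unsqueeze(1) * self.linear.weight)
            if self.linear.bias is not None:
                merged.bias.copy_(self.gain * self.linear.bias)
        return merged
```

Note how small the trainable footprint is: one scalar per output feature, which is consistent with the 46K-230K parameter counts the abstract reports for full models.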

Originally published on April 07, 2026. Curated by AI News.


