[2604.01694] MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning

arXiv - Machine Learning

Computer Science > Machine Learning
arXiv:2604.01694 (cs) [Submitted on 2 Apr 2026]

Title: MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning
Authors: Sten Rüdiger, Sebastian Raschka

Abstract: Minor Component Adaptation (MiCA) is a novel parameter-efficient fine-tuning method for large language models that adapts underutilized subspaces of model representations. Unlike conventional methods such as Low-Rank Adaptation (LoRA), which target dominant subspaces, MiCA uses singular value decomposition to identify the minor singular directions, those associated with the smallest singular values, and constrains parameter updates during fine-tuning to these directions. This strategy yields up to a 5.9x improvement in knowledge acquisition under optimized training hyperparameters, with a parameter footprint of only 6-60% of LoRA's. These results suggest that constraining adaptation to minor singular directions provides a more efficient and stable mechanism for integrating new knowledge into pre-trained language models.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2604.01694 [cs.LG] (or arXiv:2604.01694v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.01694
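The core idea in the abstract, restricting weight updates to the subspace spanned by the minor singular vectors, can be sketched with NumPy. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the function names, the choice of right-singular vectors, and the projection step are all assumptions.

```python
import numpy as np

def minor_subspace_basis(W, k):
    """Return the k right-singular vectors of W with the SMALLEST singular values."""
    # np.linalg.svd returns singular values in descending order,
    # so the minor directions are the last rows of Vt.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[-k:]  # shape (k, d_in)

def constrained_update(grad_W, V_minor):
    """Project a candidate weight update onto the minor subspace.

    Only the components of each row of grad_W that lie along the
    minor right-singular directions survive the projection.
    """
    return grad_W @ V_minor.T @ V_minor

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))          # stand-in for a pretrained weight matrix
V_minor = minor_subspace_basis(W, k=4)  # rank-4 minor subspace

# A raw gradient step, projected so it cannot disturb dominant directions.
delta = constrained_update(rng.normal(size=(64, 32)), V_minor)

# Sanity check: the projected update is orthogonal to the top
# singular direction of W, so dominant structure is left untouched.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
print(np.abs(delta @ Vt[0]).max())  # ~0 up to floating-point error
```

In this sketch the projection matrix `V_minor.T @ V_minor` is an orthogonal projector onto the minor subspace, so repeated projected updates stay confined there; how MiCA parameterizes and trains these constrained updates in practice is detailed in the paper itself.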

Originally published on April 03, 2026. Curated by AI News.

