[2604.01694] MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning
Computer Science > Machine Learning
arXiv:2604.01694 (cs)
[Submitted on 2 Apr 2026]

Title: MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning
Authors: Sten Rüdiger, Sebastian Raschka

Abstract: Minor Component Adaptation (MiCA) is a novel parameter-efficient fine-tuning method for large language models that adapts underutilized subspaces of model representations. Unlike conventional methods such as Low-Rank Adaptation (LoRA), which target dominant subspaces, MiCA uses singular value decomposition (SVD) to identify the minor singular vectors, i.e., those associated with the smallest singular values, and constrains parameter updates during fine-tuning to those directions. This strategy yields up to a 5.9x improvement in knowledge acquisition under optimized training hyperparameters, with a parameter footprint of only 6-60% of LoRA's. These results suggest that constraining adaptation to minor singular directions provides a more efficient and stable mechanism for integrating new knowledge into pre-trained language models.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2604.01694 [cs.LG] (or arXiv:2604.01694v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.01694
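The abstract describes the mechanism only at a high level. As a rough illustration, here is a minimal PyTorch sketch of constraining a fine-tuning update to a weight matrix's minor singular subspace; the class name, the r x r trainable core, the zero initialization, and the rank hyperparameter r are assumptions made for illustration, and the paper's actual parameterization may differ.

```python
import torch
import torch.nn as nn

class MinorSubspaceLinear(nn.Module):
    """Hypothetical sketch of a MiCA-style layer: the frozen pre-trained
    weight W is augmented by a trainable update confined to the subspace
    spanned by its r *smallest* singular directions (the opposite of
    LoRA-style dominant-subspace adaptation)."""

    def __init__(self, linear: nn.Linear, r: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weight

        # SVD of the pre-trained weight: W = U diag(S) V^T,
        # with singular values returned in descending order.
        W = linear.weight.detach()  # (out_features, in_features)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)

        # Keep the r minor singular directions (smallest singular values).
        self.register_buffer("U_minor", U[:, -r:])   # (out, r)
        self.register_buffer("V_minor", Vh[-r:, :])  # (r, in)

        # Trainable r x r core, initialized to zero so training starts
        # from the pre-trained model. The effective update is
        # dW = U_minor @ core @ V_minor, so gradients can only move W
        # within the minor subspace.
        self.core = nn.Parameter(torch.zeros(r, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.U_minor @ self.core @ self.V_minor  # (out, in)
        return self.linear(x) + x @ delta.T
```

Freezing the pre-trained weight and routing the trainable update through fixed minor singular bases ensures the optimizer can only adjust W along its least-used directions, which is the core constraint the abstract describes.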