[2506.01897] MLorc: Momentum Low-rank Compression for Memory Efficient Large Language Model Adaptation


arXiv - Machine Learning 4 min read


Computer Science > Machine Learning
arXiv:2506.01897 (cs)
[Submitted on 2 Jun 2025 (v1), last revised 6 Apr 2026 (this version, v4)]

Title: MLorc: Momentum Low-rank Compression for Memory Efficient Large Language Model Adaptation
Authors: Wei Shen, Zhang Yaxiang, Minhui Huang, Mengfan Xu, Jiawei Zhang, Cong Shen

Abstract: With the increasing size of large language models (LLMs), full-parameter fine-tuning imposes substantial memory demands. To alleviate this, we propose a novel memory-efficient training paradigm called Momentum Low-rank Compression (MLorc). The key idea of MLorc is to compress and reconstruct the momentum of matrix parameters during training to reduce memory consumption. Compared to LoRA, MLorc avoids enforcing a fixed-rank constraint on weight update matrices and thus enables full-parameter learning. Compared to GaLore, MLorc directly compresses the momentum rather than the gradients, thereby better preserving the training dynamics of full-parameter fine-tuning. We provide a theoretical guarantee for its convergence under mild assumptions. Empirically, MLorc consistently outperforms other memory-efficient training methods, matches or even exceeds the performance of full fine-tuning at small ranks (e.g., $r=4$), and generalizes well across different optimizers, all while not compromising t...
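To make the compress-and-reconstruct idea concrete, here is a minimal NumPy sketch of one momentum-based update where the momentum matrix is stored in low-rank form between steps. This is an illustration only, not the paper's algorithm: the names `compress` and `sgd_momentum_step` are hypothetical, truncated SVD is assumed as the compressor, and plain SGD-with-momentum stands in for whatever optimizer the paper actually uses.

```python
import numpy as np

def compress(M, r):
    """Rank-r factorization of M via truncated SVD (an assumed compressor).

    For an m x n matrix this stores (m + n) * r floats instead of m * n.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]  # factors of shape (m, r) and (r, n)

def sgd_momentum_step(W, grad, factors, r, lr=1e-2, beta=0.9):
    """One SGD-with-momentum step with momentum kept compressed between steps.

    The momentum is reconstructed from its low-rank factors, updated at full
    rank with the current gradient, applied to the weights, and re-compressed
    before being stored again.
    """
    m = factors[0] @ factors[1] if factors is not None else np.zeros_like(W)
    m = beta * m + grad            # full-rank momentum update
    W_new = W - lr * m             # full-parameter weight update (unlike LoRA)
    return W_new, compress(m, r)   # only the rank-r factors persist in memory
```

Because the weight update itself is full rank, only the optimizer state is constrained to rank r, which is the distinction the abstract draws against LoRA's fixed-rank weight updates.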

Originally published on April 07, 2026. Curated by AI News.

