[2505.11235] Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation

arXiv - Machine Learning · 4 min read

Summary

The paper presents Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation (PSOFT), a parameter-efficient fine-tuning method that preserves the semantic representations of pre-trained models while improving expressiveness and efficiency, validated across 35 NLP and CV tasks.

Why It Matters

As large models become increasingly prevalent, efficient fine-tuning methods are crucial for adapting them to specific tasks without excessive computational cost. PSOFT addresses a key limitation of existing orthogonal fine-tuning methods, which struggle to achieve expressiveness and efficiency in parameters, memory, and computation at the same time, while still preserving the semantic representations of the pre-trained model.

Key Takeaways

  • PSOFT confines orthogonal transformations to the principal subspace of pre-trained weights.
  • The method enhances adaptability by gradually relaxing orthogonality during training (see the sketch after this list).
  • Extensive experiments demonstrate PSOFT's effectiveness across 35 NLP and CV tasks.
  • PSOFT achieves a balance between semantic preservation and computational efficiency.
  • The code for PSOFT is publicly available, promoting further research and application.
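
"Gradually relaxing orthogonality" can be pictured as an annealing scheme: deviations from exact orthogonality are forbidden early in training and tolerated more as training proceeds. The sketch below is one hypothetical reading of that idea, in which a tunable vector s is regularized toward 1 with a decaying weight; the penalty form, the linear schedule, and all names are assumptions for illustration, not the paper's actual mechanism.

```python
# Hedged sketch of gradually relaxed orthogonality. Assumption (not from
# the paper): a tunable vector s equals 1 when the subspace transform is
# exactly orthogonal, and a regularizer pulls s toward 1 with a weight
# that decays over training, so early steps stay near-orthogonal and
# later steps may drift to gain adaptability.
import torch


def relaxation_weight(step: int, total_steps: int, lam0: float = 1.0) -> float:
    """Linearly decaying regularization weight: strict early, relaxed late."""
    return lam0 * max(0.0, 1.0 - step / total_steps)


def orthogonality_drift_penalty(s: torch.Tensor) -> torch.Tensor:
    """Squared distance of the tunable vector from 1; zero means the
    principal-subspace transform remains exactly orthogonal."""
    return torch.sum((s - 1.0) ** 2)


# Inside a training loop (task_loss and s come from elsewhere; names are
# hypothetical):
#   loss = task_loss + relaxation_weight(step, total_steps) \
#          * orthogonality_drift_penalty(s)
```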

arXiv:2505.11235 (cs) · Submitted on 16 May 2025 (v1), last revised 19 Feb 2026 (this version, v3)

Title: Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation
Authors: Fei Wu, Jia Hu, Geyong Min, Shiqiang Wang

Abstract: Driven by the rapid growth of model parameters, parameter-efficient fine-tuning (PEFT) has become essential for adapting large models to diverse downstream tasks under constrained computational resources. Within this paradigm, orthogonal fine-tuning and its variants preserve semantic representations of pre-trained models, but struggle to achieve both expressiveness and efficiency in terms of parameter counts, memory, and computation. To overcome this limitation, we propose efficient Orthogonal Fine-Tuning with Principal Subspace adaptation (PSOFT), which confines orthogonal transformations to the principal subspace of pre-trained weights. Specifically, PSOFT constructs this subspace via matrix decomposition to enable compatible transformations with higher effective rank, establishes a theoretical condition that strictly maintains the geometry of this subspace for essential semantic preservation, and introduces efficient tunable vectors that gradually relax orthogonality during training to enhance adaptability. Extensive experiments on 35 NLP and CV tasks...
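
To make the abstract's construction concrete, here is a minimal PyTorch sketch of the general recipe, under explicit assumptions: the principal subspace comes from a truncated SVD of the frozen weight, the orthogonal transform is Cayley-parameterized, and a tunable vector s (initialized to 1) is the knob that later relaxes orthogonality. The class name PSOFTLinearSketch, the rank r, and the exact placement of R and s are illustrative guesses, not the authors' implementation; the paper's theoretical condition is not reproduced here.

```python
# Minimal sketch of orthogonal fine-tuning confined to a principal
# subspace. Everything below is an assumption-laden illustration of the
# idea in the abstract, not the paper's actual method.
import torch
import torch.nn as nn


class PSOFTLinearSketch(nn.Module):
    """Adapts only the rank-r principal subspace of a frozen weight with
    an orthogonal transform R plus a tunable vector s; with A = 0 and
    s = 1 the adapted weight equals the original weight exactly."""

    def __init__(self, weight: torch.Tensor, r: int = 8):
        super().__init__()
        weight = weight.detach()
        # Principal subspace via truncated SVD: W ≈ U_r diag(S_r) V_r^T.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U[:, :r].clone())    # frozen left basis
        self.register_buffer("S", S[:r].clone())       # frozen singular values
        self.register_buffer("Vh", Vh[:r, :].clone())  # frozen right basis
        # Minor (non-principal) part of W, kept fixed.
        self.register_buffer("residual", weight - (self.U * self.S) @ self.Vh)
        # Skew-symmetric generator for a Cayley-parameterized orthogonal R.
        self.A = nn.Parameter(torch.zeros(r, r))
        # Tunable vector; moving away from 1 relaxes exact orthogonality.
        self.s = nn.Parameter(torch.ones(r))

    def orthogonal_factor(self) -> torch.Tensor:
        """Cayley map: R = (I - K)^{-1} (I + K) is orthogonal for skew K."""
        K = self.A - self.A.T
        I = torch.eye(self.A.shape[0], device=self.A.device, dtype=self.A.dtype)
        return torch.linalg.solve(I - K, I + K)

    def adapted_weight(self) -> torch.Tensor:
        R = self.orthogonal_factor()
        # Rotate (R) and rescale (s) only the principal subspace.
        return (self.U @ (R * self.s) * self.S) @ self.Vh + self.residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.adapted_weight().T


# Example: a 768x768 weight has ~590k entries, but only r*r + r = 272
# parameters (A and s, with r = 16) are trained here.
layer = PSOFTLinearSketch(torch.randn(768, 768), r=16)
out = layer(torch.randn(4, 768))
```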
