[2507.20997] Modular Delta Merging with Orthogonal Constraints: A Scalable Framework for Continual and Reversible Model Composition
Summary
The paper presents Modular Delta Merging with Orthogonal Constraints (MDM-OC), a framework for scalable and reversible model composition in machine learning, addressing issues like task interference and catastrophic forgetting.
Why It Matters
As machine learning models become increasingly integral to real-world applications, the ability to continually update and compose models without degrading performance or violating compliance requirements is crucial. MDM-OC offers a solution that enhances model stability and adaptability, which is vital for industries that require frequent updates and regulatory compliance.
Key Takeaways
- MDM-OC enables interference-free and reversible model composition.
- The framework supports continual integration of new models and structured unmerging for compliance.
- Extensive experiments show MDM-OC outperforms existing methods in accuracy and memory efficiency.
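The first two takeaways can be illustrated with a minimal NumPy sketch: each task's knowledge is a parameter delta from a shared base, new deltas are projected orthogonal to previously merged ones (a Gram-Schmidt-style projection, used here as a stand-in for the paper's orthogonal-subspace constraint), and removing a task is a simple subtraction. All function names and the toy vectors are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delta(base, finetuned):
    # Task knowledge encoded as a parameter delta from the shared base.
    return finetuned - base

def project_orthogonal(new_delta, merged_deltas):
    # Remove the components of new_delta lying in the span of previously
    # merged deltas, so tasks occupy orthogonal subspaces and do not
    # interfere (Gram-Schmidt-style projection).
    d = new_delta.copy()
    for prev in merged_deltas:
        denom = prev @ prev
        if denom > 0:
            d -= ((d @ prev) / denom) * prev
    return d

base = np.zeros(4)
deltas = []
for task_update in [np.array([1., 0., 0., 0.]),
                    np.array([1., 1., 0., 0.])]:
    d = project_orthogonal(delta(base, base + task_update), deltas)
    deltas.append(d)

merged = base + sum(deltas)     # composed multi-task model
unmerged = merged - deltas[0]   # reversible: subtract task 0's contribution
```

Because the stored deltas are mutually orthogonal, unmerging one task leaves the others' contributions untouched, which is what makes the composition reversible.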
Abstract
arXiv:2507.20997 (cs) · Submitted on 28 Jul 2025 (v1), last revised 21 Feb 2026 (v3)
Authors: Haris Khan, Sadia Asif, Shumaila Asif
In real-world machine learning deployments, models must be continually updated, composed, and, when required, selectively undone. However, existing approaches to model merging and continual learning often suffer from task interference, catastrophic forgetting, or lack of reversibility. We propose Modular Delta Merging with Orthogonal Constraints (MDM-OC), a novel framework that enables scalable, interference-free, and reversible composition of fine-tuned models. Each task-specific model is encoded as a delta from a shared base and projected into an orthogonal subspace to eliminate conflict. These projected deltas are then merged via gradient-based optimization to form a unified model that retains performance across tasks. Our approach supports continual integration of new models, structured unmerging for compliance with requirements such as GDPR, and model stability via elastic weight consolidation and synthetic replay. Extensive experiments on vision and natu...
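The abstract's gradient-based merging step can be sketched as follows: given already-orthogonal deltas, learn scalar merge coefficients by gradient descent. The quadratic proxy objective (sum of squared parameter distances to each task model), the learning rate, and all variable names are illustrative assumptions; the paper would optimize against real task losses on data.

```python
import numpy as np

# Toy gradient-based merge of orthogonal deltas: learn coefficients
# alpha_i for the merged model base + sum_i alpha_i * delta_i by
# minimizing a quadratic proxy loss (squared distance to each task
# model). Illustrative stand-in for optimizing real task losses.
base = np.zeros(3)
deltas = [np.array([1., 0., 0.]), np.array([0., 2., 0.])]  # orthogonal
task_models = [base + d for d in deltas]

alpha = np.zeros(len(deltas))
lr = 0.05
for _ in range(500):
    merged = base + sum(a * d for a, d in zip(alpha, deltas))
    # Gradient of sum_j ||merged - task_models[j]||^2 w.r.t. each alpha_i.
    grad = np.array([sum(2 * (merged - tm) @ d for tm in task_models)
                     for d in deltas])
    alpha -= lr * grad
```

With two tasks pulling equally on the shared parameters, each coefficient settles near 0.5, i.e. the merged model splits the difference between the task models along each orthogonal direction.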