[2603.04972] Functionality-Oriented LLM Merging on the Fisher--Rao Manifold
Computer Science > Machine Learning
arXiv:2603.04972 (cs)
[Submitted on 5 Mar 2026]

Title: Functionality-Oriented LLM Merging on the Fisher--Rao Manifold
Authors: Jiayu Wang, Zuojun Ye, Wenpeng Yin

Abstract: Weight-space merging aims to combine multiple fine-tuned LLMs into a single model without retraining, yet most existing approaches remain fundamentally parameter-space heuristics. This creates three practical limitations. First, linear averaging, task vectors, and related rules operate on Euclidean coordinates, even though the desired goal is to merge functionality, i.e., predictive behaviors across tasks. Second, when the source checkpoints are farther apart or more heterogeneous, Euclidean blends often trigger representation collapse, manifested as activation-variance shrinkage and effective-rank degradation, which sharply degrades accuracy. Third, many geometry-inspired methods are most natural for two-model interpolation and do not extend cleanly to merging N > 2 experts with a principled objective. We address these issues by formulating model merging as computing a weighted Karcher mean on the Fisher--Rao manifold, which is locally equivalent to minimizing a KL-based function distance between predictive distributions. We derive a practical fixed-point algorithm using a lightweight spherical proxy that preserves no...
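
The abstract does not spell out the fixed-point update, so the following is only a minimal sketch of the general idea under a simplifying assumption: each expert's flattened parameters are treated as a point on the unit hypersphere (a stand-in for the "lightweight spherical proxy" mentioned above), and the weighted Karcher mean is found by repeatedly averaging log-mapped points in the tangent space at the current estimate and mapping back with the exponential map. The function names, the normalization step, and the stopping rule are illustrative assumptions, not the authors' implementation.

import numpy as np

def log_map(base, x):
    # Log map on the unit sphere: tangent vector at `base` pointing toward `x`.
    cos_theta = np.clip(np.dot(base, x), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros_like(x)
    v = x - cos_theta * base            # component of x orthogonal to base
    return theta * v / np.linalg.norm(v)

def exp_map(base, v):
    # Exp map on the unit sphere: move from `base` along tangent vector `v`.
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return base
    return np.cos(norm_v) * base + np.sin(norm_v) * v / norm_v

def weighted_karcher_mean(points, weights, iters=50, tol=1e-10):
    # Fixed-point iteration for the weighted Karcher (Frechet) mean of unit vectors.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    mean = points[0] / np.linalg.norm(points[0])   # initialize at the first expert
    for _ in range(iters):
        # Weighted average of the log-mapped experts in the tangent space at `mean`.
        tangent = sum(w * log_map(mean, p / np.linalg.norm(p))
                      for w, p in zip(weights, points))
        if np.linalg.norm(tangent) < tol:
            break
        mean = exp_map(mean, tangent)
    return mean

# Hypothetical usage: merge three expert parameter vectors with weights 0.5, 0.3, 0.2.
experts = [np.random.randn(10) for _ in range(3)]
merged_direction = weighted_karcher_mean(experts, [0.5, 0.3, 0.2])

When all experts coincide, the tangent average is zero and the iteration stops immediately; when they differ, the update moves along the manifold rather than interpolating in Euclidean coordinates, which is the behavior the abstract contrasts with linear averaging and task-vector rules.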