[2603.21705] Data-Free Layer-Adaptive Merging via Fisher Information for Long-to-Short Reasoning LLMs
Computer Science > Machine Learning
arXiv:2603.21705 (cs)
[Submitted on 23 Mar 2026]

Title: Data-Free Layer-Adaptive Merging via Fisher Information for Long-to-Short Reasoning LLMs
Authors: Tian Xia

Abstract: Model merging has emerged as a practical approach to combine capabilities of specialized large language models (LLMs) without additional training. In the Long-to-Short (L2S) scenario, merging a base model with a long-chain-of-thought reasoning model aims to preserve reasoning accuracy while reducing output length. Existing methods rely on Task Arithmetic and its variants, which implicitly assume that model outputs vary linearly with the merging coefficient -- an assumption we show is systematically violated in L2S settings. We provide the first theoretical justification for layer-adaptive merging: we prove that merging error is bounded by a term proportional to the per-layer Hessian norm (Proposition 1), and establish that the Fisher Information Matrix (FIM) is a principled, computable proxy for this bound via the Fisher-Hessian equivalence at local optima. Building on this theory, we propose FIM-Merging, which computes the diagonal FIM using only random token inputs (no domain-specific calibration data required) and uses it to assign per-layer merging coefficients. On the 7B L2S benchmark, FIM-TIES achieve...
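The abstract's pipeline (diagonal FIM from random inputs, then Fisher-informed per-layer coefficients for task-arithmetic merging) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the toy two-layer MLP, the label sampling from the model's own predictive distribution, and the inverse-Fisher-norm coefficient rule `lam0 * min(norms) / norms` are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    # Toy two-layer MLP with a softmax output head.
    W1, W2 = params
    h = np.tanh(W1 @ x)
    logits = W2 @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

def grad_logp(params, x, y):
    # Gradient of log p(y|x) w.r.t. each layer via manual backprop.
    W1, W2 = params
    h, p = forward(params, x)
    d_logits = -p
    d_logits[y] += 1.0                      # one-hot(y) - p
    gW2 = np.outer(d_logits, h)
    dh = W2.T @ d_logits
    gW1 = np.outer(dh * (1.0 - h**2), x)
    return [gW1, gW2]

def fisher_diag(params, n_samples=256, dim=8):
    # Diagonal FIM estimated from random inputs only (no calibration data):
    # average squared gradients with labels drawn from the model itself.
    fims = [np.zeros_like(W) for W in params]
    for _ in range(n_samples):
        x = rng.standard_normal(dim)        # "random token" stand-in
        _, p = forward(params, x)
        y = rng.choice(len(p), p=p)
        for f, g in zip(fims, grad_logp(params, x, y)):
            f += g**2
    return [f / n_samples for f in fims]

def layer_coeffs(fims, lam0=0.5):
    # Hypothetical rule: smaller merging coefficient where the Fisher
    # norm (curvature proxy, per Proposition 1's bound) is large.
    norms = np.array([np.linalg.norm(f) for f in fims])
    return lam0 * norms.min() / norms

def merge(base, ft, coeffs):
    # Layer-adaptive task arithmetic: theta_b + lambda_l * (theta_ft - theta_b).
    return [Wb + c * (Wf - Wb) for Wb, Wf, c in zip(base, ft, coeffs)]

# Usage: merge a toy "base" model with a nearby "reasoning" model.
base = [0.1 * rng.standard_normal((16, 8)), 0.1 * rng.standard_normal((4, 16))]
ft = [W + 0.05 * rng.standard_normal(W.shape) for W in base]
fims = fisher_diag(base)
coeffs = layer_coeffs(fims)
merged = merge(base, ft, coeffs)
```

Layers with larger Fisher norms (a computable proxy for the Hessian-norm term in the error bound) receive smaller coefficients, so the merge perturbs them less; the single global coefficient of plain Task Arithmetic is the special case where all `coeffs` are equal.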