[2511.21437] A Systematic Study of In-the-Wild Model Merging for Large Language Models
Computer Science > Computation and Language
arXiv:2511.21437 (cs)
[Submitted on 26 Nov 2025 (v1), last revised 29 Mar 2026 (this version, v2)]

Title: A Systematic Study of In-the-Wild Model Merging for Large Language Models
Authors: Oğuz Kağan Hitit, Leander Girrbach, Zeynep Akata

Abstract: Model merging combines multiple fine-tuned checkpoints into a single model without additional training, offering an attractive way to reuse existing models and improve performance efficiently. However, it remains unclear whether the advantages reported for settings where all merged experts have distinct roles and are tuned on clearly separated tasks also hold when the merged experts lack clearly distinct roles and are instead trained on overlapping or even conflicting objectives. To evaluate this setting, we present a large-scale, systematic evaluation of "in-the-wild" model merging of heterogeneous experts that may have been trained on overlapping or conflicting objectives. Concretely, we evaluate six state-of-the-art merging methods, including recent subspace methods, across four open-weight LLMs, twelve fine-tuned checkpoints per base model, and sixteen standard LLM benchmarks. Evaluating through standardized benchmarks, we measure both the probability that a model merged from a heterogeneous ...
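To make the core idea concrete, below is a minimal sketch of the simplest form of model merging: uniform parameter averaging of several fine-tuned checkpoints that share one base architecture. This is only an illustration of the general technique; the paper evaluates six more sophisticated merging methods (including subspace methods) that are not shown here, and the checkpoint file names are hypothetical placeholders.

```python
# Minimal sketch: merge fine-tuned checkpoints of a shared base model by
# (optionally weighted) averaging of their parameter tensors. Illustrative
# only; not the specific methods evaluated in the paper.

import torch


def merge_checkpoints(state_dicts, weights=None):
    """Average parameter tensors across checkpoints, optionally weighted."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # All checkpoints share the same architecture, so keys and shapes match.
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged


# Hypothetical usage with two fine-tuned experts of the same base model:
# sds = [torch.load(p, map_location="cpu") for p in ["expert_a.pt", "expert_b.pt"]]
# merged_sd = merge_checkpoints(sds)
# base_model.load_state_dict(merged_sd)  # base_model instantiated from the shared architecture
```

Because merging operates purely in parameter space, it requires no additional training; the open question studied in the paper is how well such merges behave when the experts were tuned on overlapping or conflicting objectives rather than cleanly separated tasks.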