[2602.18749] Federated Reasoning Distillation Framework with Model Learnability-Aware Data Allocation
Summary
The paper presents LaDa, a federated reasoning distillation framework that tackles data allocation in federated learning by accounting for model learnability, enabling effective knowledge transfer between large and small language models.
Why It Matters
This research is significant because it tackles data allocation in federated learning systems, a key bottleneck in collaboration between large and small language models. By closing the learnability gap, it makes knowledge transfer more efficient, which can translate into better reasoning performance in downstream applications.
Key Takeaways
- Introduces LaDa, a framework for federated reasoning distillation.
- Addresses the bidirectional model learnability gap in data allocation.
- Implements a model learnability-aware data filter for effective knowledge transfer.
- Enhances domain adaptation in reasoning through contrastive distillation.
- Operates as a plug-in for existing collaboration frameworks.
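The summary names a model learnability-aware data filter but does not spell out its criterion. As a rough illustration only, such a filter might keep LLM-generated samples whose difficulty for the client SLM falls in a "learnable" band and which are novel from the server's perspective. The function names, score fields, and thresholds below are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a model learnability-aware data filter.
# Scores and thresholds are illustrative placeholders, not from the paper.

def filter_samples(samples, loss_lo=0.5, loss_hi=2.5, novelty_min=0.6):
    """Keep samples the client SLM can plausibly learn from and that
    contribute novel knowledge on the server side.

    Each sample is a dict with:
      slm_loss    -- client SLM's loss on the sample (difficulty proxy)
      llm_novelty -- server-side novelty score in [0, 1]
    """
    selected = []
    for s in samples:
        learnable = loss_lo <= s["slm_loss"] <= loss_hi  # not trivial, not hopeless
        novel = s["llm_novelty"] >= novelty_min          # adds new knowledge
        if learnable and novel:
            selected.append(s)
    return selected

samples = [
    {"id": "a", "slm_loss": 0.1, "llm_novelty": 0.9},  # too easy: SLM knows it
    {"id": "b", "slm_loss": 1.2, "llm_novelty": 0.8},  # learnable and novel
    {"id": "c", "slm_loss": 4.0, "llm_novelty": 0.9},  # too hard for the SLM
    {"id": "d", "slm_loss": 1.5, "llm_novelty": 0.2},  # redundant on the server
]
print([s["id"] for s in filter_samples(samples)])  # -> ['b']
```

In this toy version the two sides of the bidirectional gap map onto two independent checks: the loss band stands in for the SLM's learnability constraint, and the novelty threshold stands in for the LLM's selection of samples beyond its existing data.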
Computer Science > Artificial Intelligence
arXiv:2602.18749 (cs)
[Submitted on 21 Feb 2026]
Title: Federated Reasoning Distillation Framework with Model Learnability-Aware Data Allocation
Authors: Wei Guo, Siyuan Lu, Xiangdong Ran, Yiqi Tong, Yikun Ban, Zelong Xu, Jing Fan, Zixuan Huang, Xiao Zhang, Zhaojun Hu, Fuzhen Zhuang
Abstract: Data allocation plays a critical role in federated reasoning collaboration between large language models (LLMs) and small language models (SLMs). Nevertheless, existing data allocation methods fail to address an under-explored challenge in this collaboration: the bidirectional model learnability gap, where client-side SLMs cannot identify high-reward samples matching their learnability constraints for effective knowledge transfer from LLMs, while LLMs struggle to select samples contributing novel knowledge beyond their existing data. Furthermore, these collaboration frameworks face another key challenge: domain-agnostic reasoning transfer, where existing reasoning transfer methods fail to adapt flexibly to local domain data, preventing SLMs from effectively acquiring step-by-step reasoning abilities from a general LLM. To address these challenges, we propose LaDa, a federated reasoning distillation framework with model learnability-aware data allocation. It introduces a model ...
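The abstract mentions contrastive distillation for domain adaptation but gives no formula. A common shape for such an objective, sketched here purely as an assumption about the general technique and not as the paper's loss, combines a KL term pulling the student toward the in-domain teacher distribution with a margin term pushing it away from a generic, out-of-domain one.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    # KL(p || q) with a small floor to avoid log(0).
    eps = 1e-12
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def contrastive_distill_loss(student_logits, teacher_pos_logits,
                             teacher_neg_logits, margin=1.0):
    """Hypothetical contrastive distillation objective: pull the student
    toward the in-domain teacher (positive) and push its distribution at
    least `margin` away, in KL terms, from a generic teacher (negative)."""
    s = softmax(student_logits)
    pos = kl(s, softmax(teacher_pos_logits))   # attract to in-domain teacher
    neg = kl(s, softmax(teacher_neg_logits))   # repel from generic teacher
    return pos + max(0.0, margin - neg)
```

A student whose output distribution matches the in-domain teacher incurs a lower loss than one matching the generic teacher, which is the behavior a domain-adaptive distillation term is meant to encourage.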