[2602.16093] Updating Parametric Knowledge with Context Distillation Retains Post-Training Capabilities
Summary
The paper introduces a novel approach called Distillation via Split Contexts (DiSC) for continual knowledge adaptation in large language models (LLMs), addressing the challenge of retaining learned capabilities while integrating new knowledge.
Why It Matters
As LLMs become integral in various applications, the ability to update their knowledge without losing previously acquired skills is crucial. DiSC offers a solution that enhances the adaptability of LLMs, making them more effective in dynamic environments.
Key Takeaways
- DiSC allows LLMs to learn new knowledge while minimizing forgetting of prior skills.
- The method applies context distillation efficiently, without explicit generation steps during training.
- Experiments show DiSC outperforms existing methods in balancing new knowledge acquisition and retention of previous capabilities.
Computer Science > Computation and Language
arXiv:2602.16093 (cs)
[Submitted on 17 Feb 2026]

Title: Updating Parametric Knowledge with Context Distillation Retains Post-Training Capabilities
Authors: Shankar Padmanabhan, Mustafa Omer Gul, Tanya Goyal

Abstract: Post-training endows pretrained LLMs with a variety of desirable skills, including instruction-following, reasoning, and others. However, these post-trained LLMs only encode knowledge up to a cut-off date, necessitating continual adaptation. Unfortunately, existing solutions cannot simultaneously learn new knowledge from an adaptation document corpus and mitigate the forgetting of earlier learned capabilities. To address this, we introduce Distillation via Split Contexts (DiSC), a simple context-distillation based approach for continual knowledge adaptation. DiSC derives student and teacher distributions by conditioning on distinct segments of the training example and minimizes the KL divergence over the shared tokens. This allows us to efficiently apply context distillation without requiring explicit generation steps during training. We run experiments on four post-trained models and two adaptation domains. Compared to prior finetuning and distillation methods for continual adaptation, DiSC consistently reports the best trade-off betwee...
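The core objective described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the teacher and student produce next-token logits over the same vocabulary, conditioned on different segments of the training example, and that the loss is the mean KL divergence over the token positions shared by both conditioning contexts. The function names and slice-based interface are hypothetical.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

def kl_divergence(teacher_logits, student_logits):
    # Per-position KL(teacher || student) between next-token distributions.
    log_p = log_softmax(teacher_logits)
    log_q = log_softmax(student_logits)
    return (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)

def split_context_distillation_loss(teacher_logits, student_logits,
                                    teacher_shared, student_shared):
    # teacher_logits: (T_teacher, V) logits from conditioning on one segment
    # student_logits: (T_student, V) logits from conditioning on another segment
    # *_shared: slices selecting the token positions both contexts cover.
    # No sampling or generation is needed; both passes score the same tokens.
    t = teacher_logits[teacher_shared]
    s = student_logits[student_shared]
    return kl_divergence(t, s).mean()
```

Because both distributions are read off teacher and student forward passes over existing tokens, the objective avoids the explicit generation step that sampling-based distillation would require; identical logits on the shared span give a loss of zero.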