[2510.04682] TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
Computer Science > Computation and Language
arXiv:2510.04682 (cs)
[Submitted on 6 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
Authors: Chanjoo Jung, Jaehyung Kim

Abstract: Large Language Models (LLMs) are widely applied in real-world scenarios, yet fine-tuning them comes with significant computational and storage costs. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA mitigate these costs; however, the adapted parameters are dependent on the base model and cannot be transferred across different backbones. One way to address this issue is knowledge distillation, but its effectiveness inherently depends on the training data. Recent work such as TransLoRA avoids this by generating synthetic data; nevertheless, this adds complexity, since it requires training an additional discriminator model. In this paper, we propose TiTok, a new framework that enables effective LoRA Transplantation through Token-level knowledge transfer. Specifically, TiTok captures task-relevant information through a token-wise contrastive excess between a source model with and without LoRA. This excess highlights informative tokens and enables selective filtering of synthetic data, all without additional models or overhead.
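The core idea in the abstract, scoring each token by how much the LoRA adapter changes the source model's confidence and keeping only the most informative tokens, can be sketched as follows. This is a minimal illustration of the concept, not the paper's implementation: the function names (`token_excess`, `filter_tokens`), the use of log-probability differences as the excess score, and the top-k selection rule are all assumptions for the sake of the example.

```python
import numpy as np

def token_excess(lora_logprobs, base_logprobs):
    """Per-token contrastive excess: how much more probable the LoRA-adapted
    source model finds each token compared to the base model without LoRA.
    (A sketch of the idea from the abstract; the paper's exact scoring may differ.)"""
    return np.asarray(lora_logprobs) - np.asarray(base_logprobs)

def filter_tokens(tokens, lora_logprobs, base_logprobs, top_k=2):
    """Keep the top-k tokens with the largest excess, i.e. the tokens whose
    likelihood the LoRA adapter raised the most (hypothetical filtering rule)."""
    excess = token_excess(lora_logprobs, base_logprobs)
    keep = np.argsort(excess)[::-1][:top_k]
    return [tokens[i] for i in sorted(keep)]

# Toy per-token log-probabilities under the base and LoRA-adapted model.
tokens = ["The", "capital", "is", "Paris"]
base = [-0.5, -2.0, -0.4, -3.0]
lora = [-0.5, -1.0, -0.4, -0.8]
print(filter_tokens(tokens, lora, base))  # tokens with the largest LoRA gain
```

On this toy input, "capital" and "Paris" are selected because the adapter raised their log-probabilities the most, matching the intuition that the excess highlights task-relevant tokens while ignoring tokens the base model already predicts well.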