[2510.04850] Detecting Distillation Data from Reasoning Models
Computer Science > Computation and Language
arXiv:2510.04850 (cs)
[Submitted on 6 Oct 2025 (v1), last revised 8 May 2026 (this version, v3)]

Title: Detecting Distillation Data from Reasoning Models
Authors: Hengxiang Zhang, Hyeong Kyu Choi, Sharon Li, Hongxin Wei

Abstract: Reasoning distillation has emerged as a prevailing paradigm for transferring reasoning capabilities from large reasoning models to small language models. Yet, reasoning distillation risks data contamination: benchmark data may inadvertently be included in the distillation data, thereby inflating model performance metrics. In this work, we formally define the distillation data detection task, which determines whether a given question is included in the model's distillation data. The unique challenge of this task lies in the partial availability of distillation data. To address this, we propose Token Probability Deviation (TPD), a detection method that leverages the probability patterns of output tokens generated by the model rather than input tokens. Our method is motivated by the observation that seen questions tend to elicit more near-deterministic tokens from the model than unseen ones. Our TPD score is thus designed to quantify the token-level deviation of generated tokens from a high-confidence reference probability. Consequently, seen questions ca...
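The abstract is truncated before the TPD score is fully specified, but its core idea — measuring how far each generated token's probability falls below a high-confidence reference — can be sketched as follows. This is an illustrative approximation, not the paper's exact formula: the function name `tpd_score`, the reference probability `p_ref`, and the use of a clipped mean deviation are all assumptions for the sake of the example.

```python
def tpd_score(token_probs, p_ref=0.95):
    """Illustrative TPD-style score (not the paper's exact definition).

    For each generated token, measure how far its probability falls
    below a high-confidence reference p_ref; tokens at or above p_ref
    contribute zero. A lower average deviation suggests generation is
    more near-deterministic, which the paper associates with questions
    seen during distillation.
    """
    if not token_probs:
        raise ValueError("need at least one token probability")
    # Clip deviations at zero so only sub-reference tokens contribute.
    return sum(max(p_ref - p, 0.0) for p in token_probs) / len(token_probs)


# Toy usage: a confidently generated ("seen") trace vs. a hesitant one.
seen_trace = [0.99, 0.98, 0.97, 0.99]    # all above p_ref -> score 0.0
unseen_trace = [0.60, 0.50, 0.90, 0.70]  # several low-confidence tokens
assert tpd_score(seen_trace) < tpd_score(unseen_trace)
```

In practice the per-token probabilities would come from the model's output distribution during generation (e.g. the probability assigned to each sampled token), and the detection decision would threshold the resulting score.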