[2603.04859] Osmosis Distillation: Model Hijacking with the Fewest Samples
Computer Science > Cryptography and Security
arXiv:2603.04859 (cs) [Submitted on 5 Mar 2026]

Title: Osmosis Distillation: Model Hijacking with the Fewest Samples
Authors: Yuchen Shi, Huajie Chen, Heng Xu, Zhiquan Liu, Jialiang Shen, Chi Liu, Shuai Zhou, Tianqing Zhu, Wanlei Zhou

Abstract: Transfer learning leverages knowledge from pre-trained models to solve new tasks with limited data and computational resources. Meanwhile, dataset distillation has emerged to synthesize a compact dataset that preserves the critical information of the original large dataset. Combining transfer learning with dataset distillation therefore offers promising performance. However, a non-negligible security threat in transfer learning on synthetic datasets generated by dataset distillation has so far gone unexamined: an adversary can perform a model hijacking attack with only a few poisoned samples in the synthetic dataset. To reveal this threat, we propose the Osmosis Distillation (OD) attack, a novel model hijacking strategy that targets deep learning models using the fewest samples. Comprehensive evaluations on various datasets demonstrate that the OD attack attains high attack success rates on hidden tasks while preserving high model utility on the original tasks. Furthermore, the distilled osmosis set en...
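The threat model described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual OD method; it only shows the generic poisoning setup the abstract implies: an adversary injects a handful of hidden-task samples, covertly relabeled into the original task's label space, into a distilled synthetic training set. All dataset sizes, shapes, and the label mapping below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distilled (synthetic) dataset for the original task:
# 10 classes, 50 synthetic samples per class, flattened 32x32x3 inputs.
X_distilled = rng.normal(size=(500, 3072)).astype(np.float32)
y_distilled = np.repeat(np.arange(10), 50)

# A few hidden-task samples the adversary wants the victim model to learn.
# Their labels are remapped into the original task's label space
# (assumed covert mapping) so the poisoned set looks unremarkable.
X_hidden = rng.normal(size=(5, 3072)).astype(np.float32)
hidden_to_original = {0: 3, 1: 7}  # illustrative hidden->original label map
y_hidden_raw = np.array([0, 1, 0, 1, 0])
y_hidden = np.array([hidden_to_original[c] for c in y_hidden_raw])

# Inject the few poisoned samples into the synthetic training set.
X_poisoned = np.concatenate([X_distilled, X_hidden])
y_poisoned = np.concatenate([y_distilled, y_hidden])

poison_rate = len(X_hidden) / len(X_poisoned)
print(f"poisoned set: {len(X_poisoned)} samples, "
      f"poison rate {poison_rate:.2%}")
# -> poisoned set: 505 samples, poison rate 0.99%
```

A victim who fine-tunes on `X_poisoned, y_poisoned` would train on a set that is ~99% legitimate synthetic data, which is why such attacks can preserve utility on the original task while embedding a hidden one; the paper's contribution is achieving this with the fewest such samples.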