[2509.23886] Towards Understanding Subliminal Learning: When and How Hidden Biases Transfer
arXiv:2509.23886 (cs)
Computer Science > Machine Learning
[Submitted on 28 Sep 2025 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: Towards Understanding Subliminal Learning: When and How Hidden Biases Transfer
Authors: Simon Schrodi, Elias Kempf, Fazl Barez, Thomas Brox

Abstract: Language models can transfer hidden biases during distillation. For example, a teacher that "likes owls" can make its student "like owls" too, even when the training data consists only of lists of numbers. This surprising phenomenon is called subliminal learning. Subliminal learning can be expected under soft distillation, where the student is trained on the teacher's full next-token distribution. But the fact that it also occurs under hard distillation, where the student only sees sampled tokens, raises a deeper question: when and how does subliminal learning actually occur? We answer this question through controlled experiments and mechanistic analysis. Our results show that subliminal learning does not need (global) token entanglement or logit leakage. Instead, it comes down to a small set of divergence tokens: rare cases where teachers with different biases would predict different tokens. Masking out these tokens mostly removes the hidden bias transfer. Mechanistically, divergence tokens reveal that early layers a...
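The masking experiment the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' code: `divergence_mask` and `masked_nll` are hypothetical helper names, and the setup assumes greedy (argmax) decoding stands in for "the token the teacher would predict". It flags positions where a biased teacher and a reference teacher would emit different tokens, then excludes those positions from the student's hard-distillation loss.

```python
import numpy as np

def divergence_mask(biased_logits, ref_logits):
    """True at positions where the two teachers' greedy next tokens differ.

    Both inputs have shape (seq_len, vocab_size); argmax stands in for
    sampling, an assumption for this sketch.
    """
    return biased_logits.argmax(-1) != ref_logits.argmax(-1)

def masked_nll(student_logprobs, teacher_tokens, mask):
    """Mean negative log-likelihood of the teacher-sampled tokens,
    skipping the divergence positions flagged by `mask`."""
    nll = -np.take_along_axis(
        student_logprobs, teacher_tokens[..., None], axis=-1
    ).squeeze(-1)
    keep = ~mask
    return nll[keep].mean() if keep.any() else 0.0
```

Comparing the masked loss against the unmasked one across training runs would then show how much of the bias transfer is carried by those few divergence tokens, which is the ablation the abstract reports.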