[2603.14672] Seamless Deception: Larger Language Models Are Better Knowledge Concealers
About this article
Abstract page for arXiv paper 2603.14672: Seamless Deception: Larger Language Models Are Better Knowledge Concealers
Computer Science > Computation and Language arXiv:2603.14672 (cs) [Submitted on 15 Mar 2026 (v1), last revised 20 Mar 2026 (this version, v2)] Title:Seamless Deception: Larger Language Models Are Better Knowledge Concealers Authors:Dhananjay Ashok, Ruth-Ann Armstrong, Jonathan May View a PDF of the paper titled Seamless Deception: Larger Language Models Are Better Knowledge Concealers, by Dhananjay Ashok and 1 other authors View PDF HTML (experimental) Abstract:Language Models (LMs) may acquire harmful knowledge, and yet feign ignorance of these topics when under audit. Inspired by the recent discovery of deception-related behaviour patterns in LMs, we aim to train classifiers that detect when a LM is actively concealing knowledge. Initial findings on smaller models show that classifiers can detect concealment more reliably than human evaluators, with gradient-based concealment proving easier to identify than prompt-based methods. However, contrary to prior work, we find that the classifiers do not reliably generalize to unseen model architectures and topics of hidden knowledge. Most concerningly, the identifiable traces associated with concealment become fainter as the models increase in scale, with the classifiers achieving no better than random performance on any model exceeding 70 billion parameters. Our results expose a key limitation in black-box-only auditing of LMs and highlight the need to develop robust methods to detect models that are actively hiding the knowle...