[2601.10160] Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment
Summary
The paper explores how AI discourse influences the alignment of large language models (LLMs), revealing that negative narratives can lead to self-fulfilling misalignment. Through controlled studies, it demonstrates that the nature of pretraining data significantly affects AI b...
Why It Matters
Understanding the impact of AI discourse on model alignment is crucial for developers and researchers in AI safety. This research highlights the importance of curating pretraining datasets to foster aligned behaviors in AI systems, ultimately contributing to safer AI deployment.
Key Takeaways
- Negative AI discourse can lead to self-fulfilling misalignment in LLMs.
- Pretraining data significantly shapes alignment behaviors in AI systems.
- Upsampling aligned discourse can reduce misalignment scores dramatically.
- The study emphasizes the need for careful curation of pretraining datasets.
- Findings suggest alignment pretraining should be prioritized alongside capabilities.
Computer Science > Computation and Language arXiv:2601.10160 (cs) [Submitted on 15 Jan 2026 (v1), last revised 19 Feb 2026 (this version, v2)] Title:Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment Authors:Cameron Tice, Puria Radmard, Samuel Ratnam, Andy Kim, David Africa, Kyle O'Brien View a PDF of the paper titled Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment, by Cameron Tice and 5 other authors View PDF HTML (experimental) Abstract:Pretraining corpora contain extensive discourse about AI systems, yet the causal influence of this discourse on downstream alignment remains poorly understood. If prevailing descriptions of AI behaviour are predominantly negative, LLMs may internalise corresponding behavioural priors, giving rise to self-fulfilling misalignment. This paper provides the first controlled study of this hypothesis by pretraining 6.9B-parameter LLMs with varying amounts of (mis)alignment discourse. We find that discussion of AI contributes to misalignment. Upsampling synthetic training documents about AI misalignment leads to a notable increase in misaligned behaviour. Conversely, upsampling documents about aligned behaviour reduces misalignment scores from 45% to 9%. We consider this evidence of self-fulfilling alignment. These effects are dampened, but persist through post-training. Our findings establish the study of how pretraining data shapes alignment priors, or alignment pretraining, as a complement to po...