[2509.22367] What Is The Political Content in LLMs' Pre- and Post-Training Data?
Computer Science > Computation and Language
arXiv:2509.22367 (cs)
[Submitted on 26 Sep 2025 (v1), last revised 3 Apr 2026 (this version, v2)]

Title: What Is The Political Content in LLMs' Pre- and Post-Training Data?
Authors: Tanise Ceron, Dmitry Nikolaev, Dominik Stammbach, Debora Nozza

Abstract: Large language models (LLMs) are known to generate politically biased text. Yet, it remains unclear how such biases arise, making it difficult to design effective mitigation strategies. We hypothesize that these biases are rooted in the composition of training data. Taking a data-centric perspective, we formulate research questions on (1) political leaning present in data, (2) data imbalance, (3) cross-dataset similarity, and (4) data-model alignment. We then examine how exposure to political content relates to models' stances on policy issues. We analyze the political content of pre- and post-training datasets of open-source LLMs, combining large-scale sampling, political-leaning classification, and stance detection. We find that training data is systematically skewed toward left-leaning content, with pre-training corpora containing substantially more politically engaged material than post-training data. We further observe a strong correlation between political stances in training data and model behavior, and show that ...
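The pipeline the abstract describes (large-scale sampling of training documents, political-leaning classification, then comparison across corpora) can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the keyword rule in `classify_leaning` is a toy placeholder for the learned political-leaning classifier the paper uses, and the two example corpora are invented.

```python
import random
from collections import Counter

def classify_leaning(text: str) -> str:
    """Toy leaning labeler (placeholder for a trained classifier)."""
    text = text.lower()
    if "union" in text or "welfare" in text:
        return "left"
    if "tariff" in text or "deregulation" in text:
        return "right"
    return "neutral"

def leaning_distribution(corpus: list[str], sample_size: int, seed: int = 0) -> dict[str, float]:
    """Sample documents from a corpus and return the share of each leaning label."""
    rng = random.Random(seed)
    sample = rng.sample(corpus, min(sample_size, len(corpus)))
    counts = Counter(classify_leaning(doc) for doc in sample)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Invented stand-ins for a pre-training corpus and a post-training (instruction) dataset.
pre_training = [
    "The union pushed for expanded welfare programs.",
    "New tariff rules reshaped trade policy.",
    "A recipe for sourdough bread.",
    "Welfare reform dominated the debate.",
]
post_training = [
    "Please summarize this article in two sentences.",
    "The union organized a strike over wages.",
]

print(leaning_distribution(pre_training, sample_size=4))
print(leaning_distribution(post_training, sample_size=2))
```

Comparing the resulting label shares across corpora is the kind of imbalance measurement the paper's research questions (2) and (3) point at; the real study additionally runs stance detection on policy issues rather than a single coarse leaning label.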