[2603.01214] Reasoning Boosts Opinion Alignment in LLMs
Computer Science > Computation and Language
arXiv:2603.01214 (cs) [Submitted on 1 Mar 2026]

Title: Reasoning Boosts Opinion Alignment in LLMs
Authors: Frédéric Berdoz, Yann Billeter, Yann Vonlanthen, Roger Wattenhofer

Abstract: Opinion modeling aims to capture individual or group political preferences, enabling applications such as digital democracies, where models could help shape fairer and more popular policies. Given their versatility, strong generalization capabilities, and demonstrated success across diverse text-to-text applications, large language models (LLMs) are natural candidates for this task. However, due to their statistical nature and limited causal understanding, they tend to produce biased opinions when prompted naively. In this work, we study whether reasoning can improve opinion alignment. Motivated by recent advances in mathematical reasoning enabled by reinforcement learning (RL), we train models to produce profile-consistent answers through structured reasoning. We evaluate our approach on three datasets covering U.S., European, and Swiss politics. Results indicate that reasoning enhances opinion modeling and is competitive with strong baselines, but does not fully remove bias, highlighting the need for additional mechanisms to build faithful political digital twins using LLMs. By releasing both our me...