[2512.15792] A Systematic Analysis of Biases in Large Language Models
Computer Science > Computers and Society
arXiv:2512.15792 (cs)
[Submitted on 16 Dec 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: A Systematic Analysis of Biases in Large Language Models
Authors: Xulang Zhang, Rui Mao, Erik Cambria

Abstract: Large language models (LLMs) have rapidly become indispensable tools for acquiring information and supporting human decision-making. However, ensuring that these models uphold fairness across varied contexts is critical to their safe and responsible deployment. In this study, we undertake a comprehensive examination of four widely adopted LLMs, probing their underlying biases and inclinations across the dimensions of politics, ideology, alliance, language, and gender. Through a series of carefully designed experiments, we investigate their political neutrality using news summarization, ideological biases through news stance classification, tendencies toward specific geopolitical alliances via United Nations voting patterns, language bias in the context of multilingual story completion, and gender-related affinities as revealed by responses to the World Values Survey. Results indicate that while the LLMs are aligned to be neutral and impartial, they still show biases and affinities of different types.

Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)...