[2602.13246] Global AI Bias Audit for Technical Governance
Summary
This article summarizes a global audit of Large Language Models (LLMs) that evaluates geographic and socioeconomic biases in the models' knowledge of technical AI governance, highlighting disparities between the Global North and Global South.
Why It Matters
Understanding AI biases is crucial for ensuring equitable governance and safety in AI deployment. This audit reveals significant gaps in data access and knowledge, which could exacerbate existing inequalities, making it essential for policymakers to address these disparities.
Key Takeaways
- The audit identified significant biases in AI knowledge and responses based on geographic and socioeconomic factors.
- Only 11.4% of the model's responses contained numeric or factual content, and even those remain unverified, indicating a substantial risk of misinformation.
- Higher-income regions dominate AI technical knowledge, leaving lower-income countries at a disadvantage.
- The findings underscore the need for inclusive data representation in AI training.
- Policymakers in underserved regions may lack reliable insights, risking global AI safety.
Abstract
Computer Science > Computers and Society. arXiv:2602.13246 (cs). Submitted on 1 Feb 2026.
Title: Global AI Bias Audit for Technical Governance
Author: Jason Hung
This paper presents outputs from the exploratory phase of a global audit of Large Language Models (LLMs). In this exploratory phase, I used the Global AI Dataset (GAID) Project as a framework to stress-test the Llama-3 8B model and evaluate geographic and socioeconomic biases in technical AI governance awareness. Stress-testing the model with 1,704 queries spanning 213 countries and eight technical metrics revealed a significant digital barrier separating the Global North from the Global South. The model provided numeric or factual responses to only 11.4% of queries, and the empirical validity of even those responses remains unverified. The findings show that AI's technical knowledge is heavily concentrated in higher-income regions, while lower-income countries in the Global South face disproportionate, systemic information gaps. This disparity poses concerning risks for global AI safety and inclusive governance, as policymakers in underserved regions may lack reliable data-driven insights or be misled by hallucinated facts. This paper concludes that current AI alignment and training processes reinforce existing...
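The audit design described above amounts to a fixed query grid (213 countries × 8 metrics = 1,704 queries) whose responses are classified by whether they contain verifiable numeric or factual content. The sketch below illustrates one way such a harness could look; it is not the paper's actual pipeline. The `query_model` stub, the placeholder metric and country names, and the regex-based classifier are all assumptions for illustration.

```python
import re
from itertools import product

# Hypothetical metric names: the paper refers to eight technical
# metrics but does not enumerate them here, so these are placeholders.
METRICS = [
    "compute_capacity", "ai_research_output", "data_center_count",
    "ai_policy_count", "model_training_runs", "gpu_imports",
    "ai_workforce_size", "ai_incident_reports",
]

# Placeholder country list standing in for the 213 countries audited.
COUNTRIES = [f"country_{i}" for i in range(213)]

def query_model(country: str, metric: str) -> str:
    """Stub standing in for a Llama-3 8B call; swap in a real client here."""
    return f"I do not have reliable data on {metric} for {country}."

def is_factual(response: str) -> bool:
    """Crude proxy classifier: count a response as numeric/factual
    if it contains at least one digit. A real audit would verify
    the claimed figures against ground-truth data."""
    return bool(re.search(r"\d", response))

def run_audit() -> dict:
    # One query per (country, metric) cell: 213 * 8 = 1,704 queries.
    results = {
        (country, metric): is_factual(query_model(country, metric))
        for country, metric in product(COUNTRIES, METRICS)
    }
    total = len(results)
    factual = sum(results.values())
    print(f"{factual}/{total} = {factual / total:.1%} numeric/factual responses")
    return results

if __name__ == "__main__":
    run_audit()
```

Aggregating the per-cell results by country income group, rather than printing a single global rate as here, is what would surface the North/South disparity the paper reports.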