[2602.00044] When LLMs Imagine People: A Human-Centered Persona Brainstorm Audit for Bias and Fairness in Creative Applications

arXiv - AI · 4 min read

Summary

This paper introduces the Persona Brainstorm Audit (PBA), a method for assessing bias in Large Language Models (LLMs) used in creative applications. A central finding is that bias evolves non-linearly across model generations.

Why It Matters

As LLMs become integral to creative workflows, understanding and mitigating their biases is crucial for fairness and equity. PBA offers a scalable way to audit biases that traditional metrics overlook, supporting more responsible AI development.

Key Takeaways

  • PBA provides a novel framework for auditing bias in LLM-generated personas.
  • Bias in LLMs can evolve nonlinearly, with larger models not always being fairer.
  • Intersectional analysis reveals hidden disparities that single-axis metrics may miss.
  • PBA maintains stability across varying sample sizes and prompts, ensuring reliable audits.
  • The method emphasizes the importance of fairness in AI applications, especially in creative contexts.

Computer Science > Computers and Society — arXiv:2602.00044 (cs)

[Submitted on 19 Jan 2026 (v1), last revised 24 Feb 2026 (this version, v2)]

Title: When LLMs Imagine People: A Human-Centered Persona Brainstorm Audit for Bias and Fairness in Creative Applications

Authors: Hongliu Cao, Eoin Thomas, Rodrigo Acuna Agost

Abstract: Large Language Models (LLMs) used in creative workflows can reinforce stereotypes and perpetuate inequities, making fairness auditing essential. Existing methods rely on constrained tasks and fixed benchmarks, leaving open-ended creative outputs unexamined. We introduce the Persona Brainstorm Audit (PBA), a scalable and easily extensible auditing method for detecting bias across multiple intersecting identities and social roles in open-ended persona generation. PBA quantifies bias using a degree-of-freedom-aware normalized Cramér's V, producing interpretable severity labels that enable fair comparison across models and dimensions. Applying PBA to 12 LLMs (120,000 personas, 16 bias dimensions), we find that bias evolves nonlinearly across model generations: larger and newer models are not consistently fairer, and biases that initially decrease can resurface in later releases. Intersectional analysis reveals disparities hidden by single-axis metrics, where ...
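The abstract's core metric, a degree-of-freedom-aware normalized Cramér's V, can be sketched as follows. This is an illustration using the standard bias-corrected form (Bergsma's correction), not the paper's exact implementation; the function name and severity thresholds are hypothetical.

```python
import math

def cramers_v_corrected(table):
    """Bias-corrected Cramér's V for an r x c contingency table.

    Applies Bergsma's correction, which subtracts the expected chance
    association (proportional to the degrees of freedom) before normalizing,
    so tables of different shapes become comparable.
    """
    r, c = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(r)) for j in range(c)]

    # Pearson chi-squared statistic against the independence model.
    chi2 = 0.0
    for i in range(r):
        for j in range(c):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected

    phi2 = chi2 / n
    # Subtract the bias term (r-1)(c-1)/(n-1) and shrink the table dimensions.
    phi2c = max(0.0, phi2 - (r - 1) * (c - 1) / (n - 1))
    r_c = r - (r - 1) ** 2 / (n - 1)
    c_c = c - (c - 1) ** 2 / (n - 1)
    denom = min(r_c - 1, c_c - 1)
    return math.sqrt(phi2c / denom) if denom > 0 else 0.0

# A uniform table (persona attributes independent of the identity axis)
# scores 0; a perfectly skewed table scores 1.
print(cramers_v_corrected([[50, 50], [50, 50]]))    # no association
print(cramers_v_corrected([[100, 0], [0, 100]]))    # maximal association
```

In an audit like the one described, each generated persona would contribute a row count (e.g. occupation vs. gender), and the resulting V could be bucketed into the interpretable severity labels the paper mentions, for instance "negligible" below some threshold and "severe" above another; the specific cutoffs are not given in this excerpt.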
