[2602.20408] Examining and Addressing Barriers to Diversity in LLM-Generated Ideas

arXiv - AI · 4 min read

Summary

This paper examines why ideas generated by large language models (LLMs) are less diverse than those generated by humans, identifies two mechanisms that contribute to the gap, and proposes prompting interventions to enhance diversity.

Why It Matters

As LLMs become more integrated into creative processes, understanding and addressing their limitations in idea generation is crucial for fostering innovation. This research highlights the risks of homogenization in ideation and offers practical strategies to mitigate these effects, ensuring that AI can complement human creativity without stifling it.

Key Takeaways

  • LLMs tend to produce less diverse ideas than humans due to fixation and knowledge aggregation.
  • Targeted prompting techniques, like Chain-of-Thought prompting, can enhance idea diversity in LLMs.
  • Prompting with ordinary personas (rather than famous "creative entrepreneurs") improves knowledge partitioning and stimulates creativity.
  • Combining prompting strategies yields the highest diversity in generated ideas.
  • Understanding these mechanisms is essential for effective human-AI collaboration in innovation.

Computer Science > Computers and Society

arXiv:2602.20408 (cs) [Submitted on 23 Feb 2026]

Title: Examining and Addressing Barriers to Diversity in LLM-Generated Ideas
Authors: Yuting Deng, Melanie Brucks, Olivier Toubia

Abstract: Ideas generated by independent samples of humans tend to be more diverse than ideas generated from independent LLM samples, raising concerns that widespread reliance on LLMs could homogenize ideation and undermine innovation at a societal level. Drawing on cognitive psychology, we identify (both theoretically and empirically) two mechanisms undermining LLM idea diversity. First, at the individual level, LLMs exhibit fixation just as humans do, where early outputs constrain subsequent ideation. Second, at the collective level, LLMs aggregate knowledge into a unified distribution rather than exhibiting the knowledge partitioning inherent to human populations, where each person occupies a distinct region of the knowledge space. Through four studies, we demonstrate that targeted prompting interventions can address each mechanism independently: Chain-of-Thought (CoT) prompting reduces fixation by encouraging structured reasoning (only in LLMs, not humans), while ordinary personas (versus "creative entrepreneurs" such as Steve Jobs) improve knowledge partitioning by serving as diverse sampling...
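The paper's exact prompt wording is not reproduced here, so the function name, persona strings, and instruction text below are illustrative assumptions. This minimal sketch only shows how the two interventions described in the abstract — Chain-of-Thought prompting against fixation, and ordinary personas as diverse sampling points — might be composed into a single ideation prompt.

```python
import random

# Hypothetical CoT instruction; the paper's actual wording may differ.
COT_INSTRUCTION = (
    "Think step by step: first list several distinct angles on the problem, "
    "then develop one idea from each angle before giving your final answer."
)

# Ordinary (non-celebrity) personas act as diverse sampling points across
# the knowledge space, mimicking human knowledge partitioning. These
# examples are invented for illustration.
ORDINARY_PERSONAS = [
    "a retired schoolteacher from a small coastal town",
    "a night-shift warehouse worker who repairs bicycles on weekends",
    "a community nurse who grows vegetables on a balcony",
]

def build_ideation_prompt(task, persona=None, use_cot=False, rng=random):
    """Compose an ideation prompt applying one or both interventions."""
    if persona is None:
        # Sampling a different persona per query spreads independent
        # LLM samples across distinct regions of the knowledge space.
        persona = rng.choice(ORDINARY_PERSONAS)
    parts = [f"You are {persona}."]
    if use_cot:
        parts.append(COT_INSTRUCTION)
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

# Combining both strategies, which the paper reports yields the
# highest diversity in generated ideas.
prompt = build_ideation_prompt(
    "Propose a new product for reducing household food waste.",
    persona="a retired schoolteacher from a small coastal town",
    use_cot=True,
)
print(prompt)
```

Each independent call would then be sent to the LLM as its own conversation, so that no early output from one sample can fixate the others.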

