[2602.20408] Examining and Addressing Barriers to Diversity in LLM-Generated Ideas
Summary
This article examines why ideas generated by large language models (LLMs) tend to be less diverse than those produced by humans, identifies two mechanisms behind this gap (fixation and knowledge aggregation), and proposes prompting interventions to enhance diversity.
Why It Matters
As LLMs become more integrated into creative processes, understanding and addressing their limitations in idea generation is crucial for fostering innovation. This research highlights the risks of homogenization in ideation and offers practical strategies to mitigate these effects, ensuring that AI can complement human creativity without stifling it.
Key Takeaways
- LLMs tend to produce less diverse ideas than humans due to fixation and knowledge aggregation.
- Targeted prompting techniques, such as Chain-of-Thought (CoT) prompting, reduce fixation and enhance idea diversity in LLMs (but not in humans).
- Prompting with ordinary personas (rather than famous "creative entrepreneurs") improves knowledge partitioning and stimulates creativity.
- Combining the prompting strategies yields the highest diversity in generated ideas (see the sketch after this list).
- Understanding these mechanisms is essential for effective human-AI collaboration in innovation.
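To make the interventions concrete, below is a minimal Python sketch of how the two prompting strategies might be combined: sampling with ordinary, everyday personas to partition the knowledge space across independent calls, and adding a Chain-of-Thought instruction to each call to reduce fixation. The `llm_generate` function, the persona list, and the prompt wording are illustrative placeholders, not the authors' exact setup.

```python
# Sketch of combining the two interventions described in the paper:
# ordinary personas for knowledge partitioning + CoT instructions against fixation.
# `llm_generate` is a hypothetical placeholder for any chat-completion call.

from typing import List

def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completions API)."""
    raise NotImplementedError

# Ordinary, everyday personas (rather than famous "creative entrepreneurs").
PERSONAS: List[str] = [
    "a retired schoolteacher",
    "a long-haul truck driver",
    "a new parent living in a small apartment",
    "a community librarian",
]

# Illustrative Chain-of-Thought instruction encouraging structured reasoning.
COT_INSTRUCTION = (
    "Before answering, think step by step: list constraints this persona cares "
    "about, brainstorm several distinct directions, then pick the most novel one."
)

def generate_diverse_ideas(task: str) -> List[str]:
    """Draw one idea per persona, each with a CoT-style reasoning instruction."""
    ideas = []
    for persona in PERSONAS:
        prompt = (
            f"You are {persona}.\n"
            f"{COT_INSTRUCTION}\n"
            f"Task: {task}\n"
            "Finish with a single line starting 'IDEA:' containing your idea."
        )
        response = llm_generate(prompt)
        # Keep only the final idea line so reasoning traces stay out of the idea pool.
        idea = next(
            (line for line in response.splitlines() if line.startswith("IDEA:")),
            response,
        )
        ideas.append(idea.removeprefix("IDEA:").strip())
    return ideas
```

The design mirrors the paper's framing: each persona acts as a distinct sampling anchor across independent calls (collective-level partitioning), while the CoT instruction shapes reasoning within each call (individual-level fixation).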
Computer Science > Computers and Society
arXiv:2602.20408 (cs) [Submitted on 23 Feb 2026]
Title: Examining and Addressing Barriers to Diversity in LLM-Generated Ideas
Authors: Yuting Deng, Melanie Brucks, Olivier Toubia
Abstract: Ideas generated by independent samples of humans tend to be more diverse than ideas generated from independent LLM samples, raising concerns that widespread reliance on LLMs could homogenize ideation and undermine innovation at a societal level. Drawing on cognitive psychology, we identify (both theoretically and empirically) two mechanisms undermining LLM idea diversity. First, at the individual level, LLMs exhibit fixation just as humans do, where early outputs constrain subsequent ideation. Second, at the collective level, LLMs aggregate knowledge into a unified distribution rather than exhibiting the knowledge partitioning inherent to human populations, where each person occupies a distinct region of the knowledge space. Through four studies, we demonstrate that targeted prompting interventions can address each mechanism independently: Chain-of-Thought (CoT) prompting reduces fixation by encouraging structured reasoning (only in LLMs, not humans), while ordinary personas (versus "creative entrepreneurs" such as Steve Jobs) improve knowledge partitioning by serving as diverse sampling...