Gemini, ChatGPT and most other AI chatbots think alike, and it’s bad for human creativity
AI chatbots may feel creative on their own, but new research shows they often converge on the same ideas, raising concerns that relying on them could quietly narrow human creativity.
AI chatbots are supposed to expand your creativity, not quietly narrow it. But new research suggests that's exactly what may be happening when you rely on them too heavily.

A study published in Engineering Applications of Artificial Intelligence shows that leading models, including Gemini, GPT, and Llama, often land in the same conceptual territory when tackling creative tasks. On their own, many responses feel original and useful. When you zoom out, though, a different pattern emerges: across many prompts and users, outputs begin to converge.

Researchers compared human participants with a wide range of AI models using standard creativity tests, such as brainstorming new uses for everyday objects or listing unrelated words. Individually, AI held up well. As a group, its ideas were far less spread out.

Different bots, same patterns

The team didn't focus on just one system. It tested more than 20 models from different companies against over 100 people. The outcome stayed consistent across the board: AI responses showed a tighter range, even when the models came from different families.

[Image: Gemini and ChatGPT are two of the most popular AI companions. Credit: Google, OpenAI]

When mapped for similarity, chatbot answers clustered closely together, while human responses covered a much wider space.

That same pattern showed up across tasks. Whether generating ideas or unrelated concepts, models leaned on familiar structures and repeated phrasing.

Attempts to push more variet...