[2602.19946] When Pretty Isn't Useful: Investigating Why Modern Text-to-Image Models Fail as Reliable Training Data Generators
Summary
This paper investigates the limitations of modern text-to-image models as reliable training data generators, revealing that classifiers trained on their synthetic images achieve declining accuracy on real test data even as the models' visual fidelity improves.
Why It Matters
As machine learning's reliance on synthetic data grows, understanding the limitations of text-to-image models is crucial. This study challenges the assumption that improved generative realism translates to more useful training data, urging researchers to reconsider how these models are used to build training datasets.
Key Takeaways
- Text-to-image models show impressive visual fidelity but fail as reliable training data generators.
- Classification accuracy declines when using newer T2I models for synthetic data generation.
- The models tend to produce a narrow, aesthetic-centric distribution, undermining diversity.
- There's a critical need to reassess the capabilities of T2I models in vision research.
- The findings challenge the assumption that advancements in generative models equate to improvements in data quality.
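The "narrow, aesthetic-centric distribution" in the takeaways above can be quantified in several ways. A minimal illustrative sketch (not the paper's actual metric) is the mean pairwise cosine distance over image feature vectors, where lower values indicate a tighter, less diverse set:

```python
import numpy as np

def mean_pairwise_cosine_distance(features: np.ndarray) -> float:
    """Average pairwise cosine distance over a set of feature vectors.

    `features` has shape (n_samples, dim), e.g. embeddings of generated
    images. Lower values suggest a narrower (less diverse) distribution.
    """
    # Normalize each vector to unit length.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Cosine similarity matrix, converted to distances.
    dist = 1.0 - normed @ normed.T
    n = len(features)
    # Average over off-diagonal pairs only (diagonal is zero by construction).
    return float(dist.sum() / (n * (n - 1)))

# Toy check: a tight cluster scores lower than a spread-out set.
rng = np.random.default_rng(0)
spread = rng.normal(size=(200, 64))
tight = rng.normal(size=(200, 64)) * 0.05 + np.ones(64)
assert mean_pairwise_cosine_distance(tight) < mean_pairwise_cosine_distance(spread)
```

In practice such a score would be computed on embeddings from a fixed feature extractor, so that datasets from different generator versions are compared in the same space.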
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.19946 (cs) [Submitted on 23 Feb 2026]
Title: When Pretty Isn't Useful: Investigating Why Modern Text-to-Image Models Fail as Reliable Training Data Generators
Authors: Krzysztof Adamkiewicz, Brian Moser, Stanislav Frolov, Tobias Christian Nauen, Federico Raue, Andreas Dengel
Abstract: Recent text-to-image (T2I) diffusion models produce visually stunning images and demonstrate excellent prompt following. But do they perform well as synthetic vision data generators? In this work, we revisit the promise of synthetic data as a scalable substitute for real training sets and uncover a surprising performance regression. We generate large-scale synthetic datasets using state-of-the-art T2I models released between 2022 and 2025, train standard classifiers solely on this synthetic data, and evaluate them on real test data. Despite observable advances in visual fidelity and prompt adherence, classification accuracy on real test data consistently declines with newer T2I models as training data generators. Our analysis reveals a hidden trend: These models collapse to a narrow, aesthetic-centric distribution that undermines diversity and label-image alignment. Overall, our findings challenge a growing ass...
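The evaluation protocol described in the abstract (train only on synthetic data, evaluate only on real data) can be sketched with stand-in arrays. This is a hypothetical illustration: the paper trains standard image classifiers on large-scale generated datasets, while here Gaussian blobs and a nearest-class-mean classifier stand in for both, with a `shift` parameter modeling the distribution gap between synthetic and real data:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n: int, shift: float):
    """Two-class Gaussian blobs; `shift` stands in for the distribution
    gap between synthetic training images and real test images."""
    labels = rng.integers(0, 2, size=n)
    centers = np.where(labels[:, None] == 1, 2.0, -2.0)  # class means at +/-2
    return centers + rng.normal(size=(n, 8)) + shift, labels

# "Synthetic" training set (shifted distribution) and "real" test set.
X_syn, y_syn = make_data(2000, shift=0.5)
X_real, y_real = make_data(500, shift=0.0)

# Minimal stand-in classifier: nearest class mean, fit on synthetic data only.
means = np.stack([X_syn[y_syn == c].mean(axis=0) for c in (0, 1)])
preds = np.argmin(
    np.linalg.norm(X_real[:, None, :] - means[None], axis=2), axis=1
)
accuracy = (preds == y_real).mean()  # measured on real data only
```

Increasing `shift` (or shrinking the synthetic set's spread) degrades `accuracy`, mirroring the paper's finding that narrower generated distributions hurt transfer to real test data.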