[2604.08915] Large-Scale Universal Defect Generation: Foundation Models and Datasets
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.08915 (cs)
[Submitted on 10 Apr 2026]

Title: Large-Scale Universal Defect Generation: Foundation Models and Datasets
Authors: Yuanting Fan, Jun Liu, Bin-Bin Gao, Xiaochen Chen, Yuhuan Lin, Zhewei Dai, Jiawei Zhan, Chengjie Wang

Abstract: Existing defect/anomaly generation methods often rely on few-shot learning and overfit to specific defect categories due to the lack of large-scale paired defect-editing data. This issue is aggravated by substantial variation in defect scale and morphology, resulting in limited generalization, degraded realism, and weak category consistency. We address these challenges by introducing UDG, a large-scale dataset of 300K normal-abnormal-mask-caption quadruplets spanning diverse domains, and by presenting UniDG, a universal defect generation foundation model that supports both reference-based defect generation and text instruction-based defect editing without per-category fine-tuning. UniDG performs Defect-Context Editing via adaptive defect cropping and a structured diptych input format, and fuses reference and target conditions through MM-DiT multimodal attention. A two-stage training strategy, Diversity-SFT followed by Consistency-RFT, further improves diversity while enhancing realism and reference consistency...
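To make the data layout concrete, the sketch below shows what one normal-abnormal-mask-caption quadruplet and a side-by-side diptych input might look like. All field names and the concatenation scheme are illustrative assumptions; the paper's actual schema and diptych construction may differ.

```python
import numpy as np

# Hypothetical quadruplet record; field names are assumptions,
# not the UDG dataset's actual schema.
def make_quadruplet(normal, abnormal, mask, caption):
    """Bundle a normal image, its defective counterpart, the binary
    defect mask, and a text caption into one training sample."""
    return {"normal": normal, "abnormal": abnormal,
            "mask": mask, "caption": caption}

def diptych(reference, target):
    """Concatenate reference and target images (H, W, C) side by side,
    a simple stand-in for a structured diptych input."""
    assert reference.shape == target.shape
    return np.concatenate([reference, target], axis=1)

# Toy 64x64 RGB pair with a synthetic square "defect".
normal = np.zeros((64, 64, 3), dtype=np.uint8)
abnormal = normal.copy()
abnormal[20:30, 20:30] = 255
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:30, 20:30] = 1

sample = make_quadruplet(normal, abnormal, mask,
                         "a bright square defect on a dark surface")
pair = diptych(sample["normal"], sample["abnormal"])
print(pair.shape)  # (64, 128, 3)
```

A real pipeline would also crop adaptively around the mask's bounding box before forming the diptych, so small defects are not lost at full-image scale.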