[2602.20650] Dataset Color Quantization: A Training-Oriented Framework for Dataset-Level Compression
Summary
The paper presents Dataset Color Quantization (DCQ), a framework designed to compress large-scale image datasets by reducing color-space redundancy while preserving essential training information, enhancing model performance even under aggressive compression.
Why It Matters
As deep learning increasingly relies on large image datasets, effective compression methods are crucial for deployment in resource-constrained environments. DCQ addresses the challenge of high storage demands by optimizing color representation, which can significantly improve training efficiency and reduce costs in data handling.
Key Takeaways
- DCQ compresses datasets by reducing color-space redundancy.
- The framework maintains essential colors and structural details for effective model training.
- Extensive experiments show improved training performance even with aggressive compression.
- DCQ provides a scalable solution for dataset-level storage reduction.
- The approach is relevant for various datasets, including CIFAR and ImageNet.
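The core operation DCQ builds on is palette-based color quantization: replacing each image's full 24-bit color space with a small learned palette plus a per-pixel index map. The sketch below shows the basic per-image version using k-means clustering; it is an illustrative baseline, not the authors' method, which additionally shares palettes across similar images and weights colors by model perception. The function name `quantize_colors` and the parameters are assumptions for illustration.

```python
import numpy as np

def quantize_colors(img, k=8, iters=10, seed=0):
    """Reduce an HxWx3 uint8 image to a k-color palette via k-means.

    Returns the palette (k x 3, uint8) and an HxW index map, which
    together are far smaller to store than the original image.
    This is a generic baseline, not DCQ itself.
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(np.float64)
    # Initialize the palette with k distinct randomly chosen pixels.
    palette = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest palette color (Euclidean in RGB).
        dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each palette entry to the mean of its assigned pixels.
        for c in range(k):
            mask = labels == c
            if mask.any():
                palette[c] = pixels[mask].mean(axis=0)
    return palette.round().astype(np.uint8), labels.reshape(img.shape[:2])

# Usage: a toy 16x16 RGB image compresses to a 4-color palette plus indices.
img = np.random.default_rng(1).integers(0, 256, (16, 16, 3), dtype=np.uint8)
palette, indices = quantize_colors(img, k=4)
print(palette.shape, indices.shape)
```

Storing a palette plus an index map needs roughly log2(k) bits per pixel instead of 24, which is where the dataset-level storage savings come from.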
arXiv:2602.20650 (cs) — Computer Science > Computer Vision and Pattern Recognition
Submitted on 24 Feb 2026
Title: Dataset Color Quantization: A Training-Oriented Framework for Dataset-Level Compression
Authors: Chenyue Yu, Lingao Xiao, Jinhong Deng, Ivor W. Tsang, Yang He
Abstract: Large-scale image datasets are fundamental to deep learning, but their high storage demands pose challenges for deployment in resource-constrained environments. While existing approaches reduce dataset size by discarding samples, they often ignore the significant redundancy within each image -- particularly in the color space. To address this, we propose Dataset Color Quantization (DCQ), a unified framework that compresses visual datasets by reducing color-space redundancy while preserving information crucial for model training. DCQ achieves this by enforcing consistent palette representations across similar images, selectively retaining semantically important colors guided by model perception, and maintaining structural details necessary for effective feature learning. Extensive experiments across CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K show that DCQ significantly improves training performance under aggressive compression, offering a scalable and robust solution for dataset-level storage reduction. Code i...