[2602.20650] Dataset Color Quantization: A Training-Oriented Framework for Dataset-Level Compression

arXiv - AI · 3 min read

Summary

The paper presents Dataset Color Quantization (DCQ), a framework that compresses large-scale image datasets by reducing color-space redundancy while preserving the information essential for training, improving training performance even under aggressive compression.

Why It Matters

As deep learning increasingly relies on large image datasets, effective compression methods are crucial for deployment in resource-constrained environments. DCQ addresses the challenge of high storage demands by optimizing color representation, which can significantly improve training efficiency and reduce costs in data handling.

Key Takeaways

  • DCQ compresses datasets by reducing color-space redundancy.
  • The framework maintains essential colors and structural details for effective model training.
  • Extensive experiments show improved training performance even with aggressive compression.
  • DCQ provides a scalable solution for dataset-level storage reduction.
  • The approach is relevant for various datasets, including CIFAR and ImageNet.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.20650 (cs) · Submitted on 24 Feb 2026

Title: Dataset Color Quantization: A Training-Oriented Framework for Dataset-Level Compression

Authors: Chenyue Yu, Lingao Xiao, Jinhong Deng, Ivor W. Tsang, Yang He

Abstract: Large-scale image datasets are fundamental to deep learning, but their high storage demands pose challenges for deployment in resource-constrained environments. While existing approaches reduce dataset size by discarding samples, they often ignore the significant redundancy within each image -- particularly in the color space. To address this, we propose Dataset Color Quantization (DCQ), a unified framework that compresses visual datasets by reducing color-space redundancy while preserving information crucial for model training. DCQ achieves this by enforcing consistent palette representations across similar images, selectively retaining semantically important colors guided by model perception, and maintaining structural details necessary for effective feature learning. Extensive experiments across CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K show that DCQ significantly improves training performance under aggressive compression, offering a scalable and robust solution for dataset-level storage reduction. Code i...
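For readers unfamiliar with the underlying operation, the basic idea DCQ builds on is classic palette quantization: replace each pixel's color with the nearest entry in a small learned palette. The sketch below shows a minimal k-means palette quantizer in plain numpy. This is only an illustration of the base technique, not the paper's method — DCQ additionally shares palettes across similar images and selects colors guided by model perception, neither of which is shown here, and all names in the sketch are hypothetical.

```python
import numpy as np

def quantize_colors(image, n_colors=8, n_iters=10, seed=0):
    """Map each pixel of an HxWx3 image to the nearest of n_colors
    palette entries learned by a small k-means loop."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float64)
    # Initialise the palette with randomly chosen pixels.
    palette = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(n_iters):
        # Assign every pixel to its nearest palette color.
        dists = np.linalg.norm(pixels[:, None] - palette[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each palette entry to the mean of its assigned pixels.
        for k in range(n_colors):
            members = pixels[labels == k]
            if len(members):
                palette[k] = members.mean(axis=0)
    # Final assignment against the converged palette.
    dists = np.linalg.norm(pixels[:, None] - palette[None, :], axis=2)
    labels = dists.argmin(axis=1)
    quantized = palette[labels].reshape(image.shape)
    return quantized.astype(image.dtype), palette

# A 4-color quantization: the output image uses at most 4 distinct colors,
# so it can be stored as a tiny palette plus 2-bit indices per pixel.
img = np.random.default_rng(1).integers(0, 256, (16, 16, 3), dtype=np.uint8)
out, palette = quantize_colors(img, n_colors=4)
assert len(np.unique(out.reshape(-1, 3), axis=0)) <= 4
```

The storage saving comes from the index representation: with a k-entry palette, each pixel needs only log2(k) bits instead of 24, which is why aggressive palette reduction compresses datasets so effectively when the retained colors are chosen well.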

Related Articles

  • [2603.18940] Entropy trajectory shape predicts LLM reasoning reliability: A diagnostic study of uncertainty dynamics in chain-of-thought (arXiv - Machine Learning · 3 min)
  • [2512.20620] Uncovering Patterns of Brain Activity from EEG Data Consistently Associated with Cybersickness Using Neural Network Interpretability Maps (arXiv - Machine Learning · 4 min)
  • [2512.13607] Nemotron-Cascade: Scaling Cascaded Reinforcement Learning for General-Purpose Reasoning Models (arXiv - Machine Learning · 4 min)
  • [2512.02650] Hear What Matters! Text-conditioned Selective Video-to-Audio Generation (arXiv - Machine Learning · 3 min)