[2602.20731] Communication-Inspired Tokenization for Structured Image Representations

arXiv - Machine Learning · 4 min read

Summary

The paper presents COMmunication inspired Tokenization (COMiT), a novel framework for structured image representations that enhances object-level semantic understanding in visual tokenization.

Why It Matters

This research addresses limitations in existing image tokenization methods, which often focus on local textures rather than semantic structures. By introducing a communication-inspired approach, it aims to improve the interpretability and generalization of visual representations, which is crucial for advancing computer vision applications.

Key Takeaways

  • COMiT learns structured discrete token sequences that capture object-level semantics rather than local texture.
  • The encoder iteratively observes localized image crops and recurrently refines the token sequence within a fixed budget.
  • The structured representation supports compositional generalization and relational reasoning in downstream models.
  • Both encoding and decoding are implemented within a single transformer model for efficiency.
  • Semantic alignment of tokens is crucial for grounding and interpretability in visual tasks.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.20731 (cs) [Submitted on 24 Feb 2026]

Title: Communication-Inspired Tokenization for Structured Image Representations

Authors: Aram Davtyan, Yusuf Sahin, Yasaman Haghighi, Sebastian Stapf, Pablo Acuaviva, Alexandre Alahi, Paolo Favaro

Abstract: Discrete image tokenizers have emerged as a key component of modern vision and multimodal systems, providing a sequential interface for transformer-based architectures. However, most existing approaches remain primarily optimized for reconstruction and compression, often yielding tokens that capture local texture rather than object-level semantic structure. Inspired by the incremental and compositional nature of human communication, we introduce COMmunication inspired Tokenization (COMiT), a framework for learning structured discrete visual token sequences. COMiT constructs a latent message within a fixed token budget by iteratively observing localized image crops and recurrently updating its discrete representation. At each step, the model integrates new visual information while refining and reorganizing the existing token sequence. After several encoding iterations, the final message conditions a flow-matching decoder that reconstructs the full image. Both encoding and decoding are implemented with...
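The encoding loop described in the abstract — iteratively observing localized crops and re-quantizing a fixed-budget discrete message — can be sketched as follows. This is a toy illustration, not the paper's implementation: all dimensions, the strip-based crop policy, and the additive update rule are assumptions standing in for the transformer encoder and flow-matching decoder the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
NUM_TOKENS = 16      # fixed token budget for the latent "message"
CODEBOOK_SIZE = 256  # discrete vocabulary of visual tokens
NUM_STEPS = 4        # number of encoding iterations
EMBED_DIM = 32

codebook = rng.normal(size=(CODEBOOK_SIZE, EMBED_DIM))

def observe_crop(image, step, num_steps):
    """Take a localized crop (toy policy: successive horizontal strips)."""
    strip = image.shape[0] // num_steps
    return image[step * strip:(step + 1) * strip]

def encode_step(tokens, crop):
    """Toy stand-in for the recurrent update: mix crop statistics into the
    current token embeddings, then re-quantize every position against the
    codebook, so the whole sequence can be refined and reorganized."""
    crop_feat = crop.reshape(-1)[:EMBED_DIM]
    embeds = codebook[tokens] + crop_feat                          # integrate new observation
    dists = ((embeds[:, None, :] - codebook[None]) ** 2).sum(-1)   # (NUM_TOKENS, CODEBOOK_SIZE)
    return dists.argmin(axis=1)                                    # refined discrete tokens

image = rng.normal(size=(64, 64))
tokens = np.zeros(NUM_TOKENS, dtype=int)   # start from an "empty" message
for step in range(NUM_STEPS):
    crop = observe_crop(image, step, NUM_STEPS)
    tokens = encode_step(tokens, crop)

# 'tokens' is the final fixed-budget discrete message; in COMiT it would
# condition a flow-matching decoder that reconstructs the full image.
print(tokens.shape)
```

Note the key structural point the sketch preserves: the token budget stays fixed across iterations, and every position may change at each step, rather than tokens being emitted once and frozen.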

Related Articles

[2603.18940] Entropy trajectory shape predicts LLM reasoning reliability: A diagnostic study of uncertainty dynamics in chain-of-thought
LLMs

Abstract page for arXiv paper 2603.18940: Entropy trajectory shape predicts LLM reasoning reliability: A diagnostic study of uncertainty ...

arXiv - Machine Learning · 3 min
[2512.20620] Uncovering Patterns of Brain Activity from EEG Data Consistently Associated with Cybersickness Using Neural Network Interpretability Maps
Machine Learning

Abstract page for arXiv paper 2512.20620: Uncovering Patterns of Brain Activity from EEG Data Consistently Associated with Cybersickness ...

arXiv - Machine Learning · 4 min
[2512.13607] Nemotron-Cascade: Scaling Cascaded Reinforcement Learning for General-Purpose Reasoning Models
Machine Learning

Abstract page for arXiv paper 2512.13607: Nemotron-Cascade: Scaling Cascaded Reinforcement Learning for General-Purpose Reasoning Models

arXiv - Machine Learning · 4 min
[2512.02650] Hear What Matters! Text-conditioned Selective Video-to-Audio Generation
Machine Learning

Abstract page for arXiv paper 2512.02650: Hear What Matters! Text-conditioned Selective Video-to-Audio Generation

arXiv - Machine Learning · 3 min