[2602.17929] ZACH-ViT: Regime-Dependent Inductive Bias in Compact Vision Transformers for Medical Imaging
Summary
ZACH-ViT introduces a compact Vision Transformer architecture tailored for medical imaging. By removing fixed spatial priors (positional embeddings and the [CLS] token), it improves adaptability in clinical settings where spatial layout is weakly informative.
Why It Matters
This research addresses the limitations of traditional Vision Transformers in medical imaging, where spatial layout can be inconsistent. By proposing ZACH-ViT, the study emphasizes the importance of aligning model architecture with data characteristics, potentially improving diagnostic accuracy in resource-constrained environments.
Key Takeaways
- ZACH-ViT removes positional embeddings and the [CLS] token to enhance adaptability.
- The model demonstrates superior performance on datasets where spatial information is weak or inconsistent.
- Empirical results show that architectural inductive bias is crucial for model effectiveness.
- ZACH-ViT maintains competitive performance with minimal parameters and fast inference times.
- The findings support the need for tailored models in specific application domains like medical imaging.
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.17929 (cs) [Submitted on 20 Feb 2026]
Title: ZACH-ViT: Regime-Dependent Inductive Bias in Compact Vision Transformers for Medical Imaging
Authors: Athanasios Angelakis
Abstract: Vision Transformers rely on positional embeddings and class tokens that encode fixed spatial priors. While effective for natural images, these priors may hinder generalization when spatial layout is weakly informative or inconsistent, a frequent condition in medical imaging and edge-deployed clinical systems. We introduce ZACH-ViT (Zero-token Adaptive Compact Hierarchical Vision Transformer), a compact Vision Transformer that removes both positional embeddings and the [CLS] token, achieving permutation invariance through global average pooling over patch representations. The term "Zero-token" specifically refers to removing the dedicated [CLS] aggregation token and positional embeddings; patch tokens remain unchanged and are processed normally. Adaptive residual projections preserve training stability in compact configurations while maintaining a strict parameter budget. Evaluation is performed across seven MedMNIST datasets spanning binary and multi-class tasks under a strict few-shot protocol (50 samples per class, fixed hyperparameters, five random seeds)...
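The abstract's central mechanism — dropping positional embeddings and the [CLS] token, then aggregating with global average pooling — is what makes the model's prediction invariant to patch order. A minimal NumPy sketch can illustrate why: self-attention without positional information is permutation-equivariant, so mean-pooling its output is permutation-invariant. This is an illustrative toy, not the authors' implementation; the single attention layer, dimensions, and the `zero_token_vit` name are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Single-head self-attention with NO positional embeddings:
    # permuting the input rows permutes the output rows identically
    # (permutation equivariance), because each token attends over
    # the same unordered set of tokens.
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

d, n_patches, n_classes = 16, 9, 3  # toy sizes (assumption)
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
W_head = 0.1 * rng.standard_normal((d, n_classes))  # toy classifier head

def zero_token_vit(patch_tokens):
    # Zero-token aggregation in the paper's sense: no [CLS] token is
    # prepended; instead, patch representations are global-average-pooled.
    h = patch_tokens + self_attention(patch_tokens, Wq, Wk, Wv)  # residual
    pooled = h.mean(axis=0)   # GAP over patch tokens -> order-independent
    return pooled @ W_head    # class logits

patches = rng.standard_normal((n_patches, d))
logits = zero_token_vit(patches)
logits_shuffled = zero_token_vit(patches[rng.permutation(n_patches)])

# Shuffling the patch order leaves the prediction unchanged.
assert np.allclose(logits, logits_shuffled)
```

With fixed positional embeddings added to `patch_tokens` before attention, the same assertion would fail, which is the sketch-level version of the paper's argument that such priors can hurt when spatial layout is uninformative.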