[2602.16918] Xray-Visual Models: Scaling Vision models on Industry Scale Data
Summary
The paper presents Xray-Visual, a unified vision model architecture for large-scale image and video understanding, trained on industry-scale social media data from Facebook and Instagram.
Why It Matters
Xray-Visual addresses the challenges of scaling vision models by leveraging vast datasets and advanced training techniques, setting new benchmarks in multimodal understanding. Its implications for industries relying on image and video analysis are significant, enhancing performance and robustness in real-world applications.
Key Takeaways
- Xray-Visual is trained on over 15 billion curated image-text pairs and 10 billion video-hashtag pairs.
- The model employs a three-stage training pipeline combining self-supervised MAE pretraining, semi-supervised hashtag classification, and CLIP-style contrastive learning.
- It achieves state-of-the-art performance across multiple benchmarks, including ImageNet and Kinetics.
- Integration with large language models enhances retrieval performance and generalization.
- The architecture maintains high computational efficiency while improving robustness to domain shifts.
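To make the third training stage concrete, here is a minimal NumPy sketch of a CLIP-style symmetric contrastive (InfoNCE) loss over a batch of paired image/text embeddings. The function name, embedding shapes, and the temperature value are illustrative assumptions, not details from the paper:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Illustrative sketch of CLIP-style contrastive training; the temperature
    and shapes are assumptions, not values from the paper.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (B, B) similarity matrix
    labels = np.arange(len(logits))          # matching pairs lie on the diagonal

    def xent(l):
        # Numerically stable cross-entropy that targets the diagonal entries.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return (xent(logits) + xent(logits.T)) / 2
```

Perfectly aligned pairs drive the loss toward zero, while mismatched pairs keep it near log of the batch size, which is what makes the objective usable as a training signal.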
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.16918 (cs)
[Submitted on 18 Feb 2026]
Title: Xray-Visual Models: Scaling Vision models on Industry Scale Data
Authors: Shlok Mishra, Tsung-Yu Lin, Linda Wang, Hongli Xu, Yimin Liu, Michael Hsu, Chaitanya Ahuja, Hao Yuan, Jianpeng Cheng, Hong-You Chen, Haoyuan Xu, Chao Li, Abhijeet Awasthi, Jihye Moon, Don Husa, Michael Ge, Sumedha Singla, Arkabandhu Chowdhury, Phong Dingh, Satya Narayan Shukla, Yonghuan Yang, David Jacobs, Qi Guo, Jun Xiao, Xiangjun Fan, Aashu Singh
Abstract: We present Xray-Visual, a unified vision model architecture for large-scale image and video understanding trained on industry-scale social media data. Our model leverages over 15 billion curated image-text pairs and 10 billion video-hashtag pairs from Facebook and Instagram, employing robust data curation pipelines that incorporate balancing and noise suppression strategies to maximize semantic diversity while minimizing label noise. We introduce a three-stage training pipeline that combines self-supervised MAE, semi-supervised hashtag classification, and CLIP-style contrastive learning to jointly optimize image and video modalities. Our architecture builds on a Vision Transformer backbone enhanced with efficient token reorganization (EViT) for improved computational efficien...
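The abstract's mention of EViT-style token reorganization can be illustrated with a short sketch: keep the tokens with the highest [CLS] attention and fuse the rest into a single token. This is a schematic of the general EViT idea under assumed shapes and a hypothetical `keep_ratio`, not the paper's integration:

```python
import numpy as np

def evit_keep_tokens(tokens, cls_attn, keep_ratio=0.5):
    """Attention-based token reorganization, EViT-style.

    `tokens` is the (N, D) patch-token matrix (excluding [CLS]); `cls_attn`
    holds one [CLS]-to-token attention weight per token. Names and the
    keep_ratio default are illustrative assumptions.
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    order = np.argsort(cls_attn)[::-1]       # token indices, most attentive first
    keep, drop = order[:k], order[k:]
    kept = tokens[keep]
    if drop.size:
        # Fuse the pruned tokens into one token, weighted by attention,
        # so their information is summarized rather than discarded.
        w = cls_attn[drop] / cls_attn[drop].sum()
        fused = (w[:, None] * tokens[drop]).sum(axis=0, keepdims=True)
        kept = np.concatenate([kept, fused], axis=0)
    return kept
```

Halving the token count this way shrinks the quadratic attention cost in later transformer layers, which is the efficiency gain the abstract refers to.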