[2602.22678] ViCLIP-OT: The First Foundation Vision-Language Model for Vietnamese Image-Text Retrieval with Optimal Transport

arXiv - AI · 4 min read · Article

Summary

ViCLIP-OT introduces a novel vision-language model tailored for Vietnamese image-text retrieval, outperforming existing models in low-resource settings.

Why It Matters

This research addresses the gap in effective image-text retrieval systems for low-resource languages such as Vietnamese, showcasing the potential for improved multimedia systems in underrepresented linguistic contexts. By improving retrieval performance with an optimal-transport training objective, it contributes to the broader field of AI and accessibility.

Key Takeaways

  • ViCLIP-OT is specifically designed for Vietnamese image-text retrieval.
  • The model outperforms existing baselines such as CLIP and SigLIP in both in-domain and zero-shot settings.
  • Integration of SIGROT enhances cross-modal consistency and reduces modality gaps.
  • Achieved significant improvements in Recall@K metrics on Vietnamese datasets.
  • Demonstrates the importance of tailored models for low-resource languages.
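The Recall@K figures cited above can be made concrete. The sketch below is a generic illustration of how Recall@K is typically computed for image-text retrieval, not the paper's evaluation code; the `recall_at_k` helper and the toy similarity matrix are illustrative assumptions, with each image's ground-truth caption assumed to share its row index.

```python
import numpy as np

def recall_at_k(similarity, k):
    """Fraction of queries whose ground-truth match (assumed to be the
    candidate with the same index) appears among the top-k retrieved items."""
    # Rank candidates for each query by descending similarity.
    ranked = np.argsort(-similarity, axis=1)
    targets = np.arange(similarity.shape[0])[:, None]
    hits = (ranked[:, :k] == targets).any(axis=1)
    return hits.mean()

# Toy 4x4 image-to-text similarity matrix (rows are image queries).
sim = np.array([
    [0.9, 0.1, 0.2, 0.3],
    [0.2, 0.8, 0.7, 0.1],
    [0.6, 0.5, 0.4, 0.3],   # correct caption only ranked 3rd
    [0.1, 0.2, 0.3, 0.9],
])
print(recall_at_k(sim, 1))  # 3 of 4 queries correct at rank 1 -> 0.75
print(recall_at_k(sim, 3))  # every target within the top 3 -> 1.0
```

Benchmarks such as UIT-OpenViIC typically report an average over several cutoffs (e.g. K = 1, 5, 10) and over both retrieval directions.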

Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.22678 (cs) [Submitted on 26 Feb 2026]

Title: ViCLIP-OT: The First Foundation Vision-Language Model for Vietnamese Image-Text Retrieval with Optimal Transport
Authors: Quoc-Khang Tran, Minh-Thien Nguyen, Nguyen-Khang Pham

Abstract: Image-text retrieval has become a fundamental component in intelligent multimedia systems; however, most existing vision-language models are optimized for high-resource languages and remain suboptimal for low-resource settings such as Vietnamese. This work introduces ViCLIP-OT, a foundation vision-language model specifically designed for Vietnamese image-text retrieval. The proposed framework integrates CLIP-style contrastive learning with a Similarity-Graph Regularized Optimal Transport (SIGROT) loss to enhance global cross-modal consistency and mitigate modality-gap issues. Extensive experiments on three Vietnamese benchmarks (UIT-OpenViIC, KTVIC, and Crossmodal-3600) demonstrate that ViCLIP-OT consistently outperforms CLIP and SigLIP baselines in both in-domain and zero-shot settings. On UIT-OpenViIC, the model achieves an average Recall@K of 67.34%, improving upon CLIP by 5.75 percentage points. In zero-shot evaluation on Crossmodal-3600, ViCLIP-O...
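The abstract describes SIGROT as combining a similarity-graph regularizer with optimal transport, but the exact loss is only given in the paper. As a hedged illustration of the optimal-transport ingredient alone, the sketch below runs plain entropic-regularized OT (Sinkhorn iterations) between uniform marginals over a toy batch of image and text embeddings; the `sinkhorn` function, the `eps` value, and the synthetic embeddings are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=100):
    """Entropic-regularized OT plan between two uniform distributions
    (plain Sinkhorn; SIGROT's graph regularizer is not modeled here)."""
    n, m = cost.shape
    K = np.exp(-cost / eps)                 # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]      # transport plan

# Cost = 1 - cosine similarity between toy image/text embeddings.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = img + 0.05 * rng.normal(size=(4, 8))   # paired captions = noisy copies
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
cost = 1.0 - img @ txt.T
plan = sinkhorn(cost)
print(plan.round(3))   # mass concentrates on the diagonal (matched pairs)
```

In a contrastive training setup, a plan like this can serve as a soft global alignment target across the batch, complementing the per-pair InfoNCE objective that CLIP-style models use.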

Related Articles

I Asked ChatGPT 500 Questions. Here Are the Ads I Saw Most Often | WIRED

Ads are rolling out across the US on ChatGPT’s free tier. I asked OpenAI's bot 500 questions to see what these ads were like and how they...

Wired - AI · 9 min · Llms

Abacus.Ai Claw LLM consumes an incredible amount of credit without any usage :(

Three days ago, I clicked the "Deploy OpenClaw In Seconds" button to get an overview of the new service, but I didn't build any automatio...

Reddit - Artificial Intelligence · 1 min ·
Google’s Gemini AI app debuts in Hong Kong

Tech giant’s chatbot service tops Apple’s app store chart in the city.

AI Tools & Products · 2 min · Llms
Google Launches Gemini Import Tools to Poach Users From Rival AI Apps

Anyone looking to switch their AI assistant will find it surprisingly easy, as it only takes a few steps to move from A to B. This is not...

AI Tools & Products · 4 min · Llms

