[2602.23588] Hyperdimensional Cross-Modal Alignment of Frozen Language and Image Models for Efficient Image Captioning
Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.23588 (cs)

[Submitted on 27 Feb 2026]

Title: Hyperdimensional Cross-Modal Alignment of Frozen Language and Image Models for Efficient Image Captioning

Authors: Abhishek Dalvi, Vasant Honavar

Abstract: Large unimodal foundation models for vision and language encode rich semantic structures, yet aligning them typically requires computationally intensive multimodal fine-tuning. Such approaches depend on large-scale parameter updates, are resource intensive, and can perturb pretrained representations. Emerging evidence suggests, however, that independently trained foundation models may already exhibit latent semantic compatibility, reflecting shared structures in the data they model. This raises a fundamental question: can cross-modal alignment be achieved without modifying the models themselves? Here we introduce HDFLIM (HyperDimensional computing with Frozen Language and Image Models), a framework that establishes cross-modal mappings while keeping pretrained vision and language models fully frozen. HDFLIM projects unimodal embeddings into a shared hyperdimensional space and leverages lightweight symbolic operations (binding, bundling, and similarity-based retrieval) to construct associative cross-modal representations ...
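The abstract names the three classic hyperdimensional-computing primitives: binding, bundling, and similarity-based retrieval. A minimal sketch of how frozen unimodal embeddings could be projected into a shared bipolar hyperdimensional space and associated via these operations is given below. The projection matrices, the dimension D, the toy embeddings, and helper names such as project and bundle are illustrative assumptions, not HDFLIM's actual implementation.

```python
# Sketch of HDC binding / bundling / similarity retrieval over frozen
# embeddings. All sizes and names here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hyperdimensional space size (assumed)

def project(emb, proj):
    """Map a frozen unimodal embedding into a bipolar {-1,+1}^D hypervector
    via a fixed random projection followed by the sign nonlinearity."""
    return np.sign(proj @ emb)

def bind(a, b):
    """Binding: elementwise product; associates two hypervectors.
    For bipolar vectors, binding is its own inverse."""
    return a * b

def bundle(vectors):
    """Bundling: elementwise majority vote, superposing a set of
    hypervectors into a single associative memory vector."""
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    """Normalized dot-product (cosine) similarity between hypervectors."""
    return float(a @ b) / D

# Toy stand-ins for frozen image/text encoder outputs (e.g. 512-d / 768-d).
d_img, d_txt = 512, 768
P_img = rng.standard_normal((D, d_img))  # fixed, never trained
P_txt = rng.standard_normal((D, d_txt))

img_embs = rng.standard_normal((5, d_img))
txt_embs = rng.standard_normal((5, d_txt))

img_hvs = [project(e, P_img) for e in img_embs]
txt_hvs = [project(e, P_txt) for e in txt_embs]

# Associative cross-modal memory: bundle the bound image-text pairs.
memory = bundle([bind(i, t) for i, t in zip(img_hvs, txt_hvs)])

# Retrieval: unbind a query image hypervector from the memory, then
# return the stored caption hypervector most similar to the result.
query = img_hvs[2]
noisy_txt = bind(memory, query)
best = max(range(len(txt_hvs)), key=lambda k: similarity(noisy_txt, txt_hvs[k]))
print("retrieved caption index:", best)  # expected: 2
```

With a large D, the crosstalk from the other bundled pairs is near-orthogonal noise, so the unbound query recovers its paired caption with high probability; no gradient updates touch the frozen encoders, which is the property the abstract emphasizes.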