[2603.19451] LoFi: Location-Aware Fine-Grained Representation Learning for Chest X-ray
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.19451 (cs) [Submitted on 19 Mar 2026]

Title: LoFi: Location-Aware Fine-Grained Representation Learning for Chest X-ray
Authors: Myeongkyun Kang, Yanting Yang, Xiaoxiao Li

Abstract: Fine-grained representation learning is crucial for retrieval and phrase grounding in chest X-rays, where clinically relevant findings are often spatially confined. However, the lack of region-level supervision in contrastive models and the limited ability of large vision-language models to capture fine-grained representations in external validation lead to suboptimal performance on these tasks. To address these limitations, we propose Location-aware Fine-grained representation learning (LoFi), which jointly optimizes sigmoid, captioning, and location-aware captioning losses using a lightweight large language model. The location-aware captioning loss enables region-level supervision through grounding and dense captioning objectives, thereby facilitating fine-grained representation learning. Building upon these representations, we integrate a fine-grained encoder into retrieval-based in-context learning to enhance chest X-ray grounding across diverse settings. Extensive experiments demonstrate that our method achieves superior retrieval and phrase grounding pe...
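The first term of the joint objective, the sigmoid loss, is a SigLIP-style pairwise contrastive loss over image–text pairs. The sketch below illustrates that term only; the temperature/bias values, embedding sizes, and the omitted captioning and location-aware captioning terms are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a sigmoid (SigLIP-style) contrastive loss: every
# image-text pair in the batch is scored independently with a binary
# label (+1 for the matched diagonal pair, -1 otherwise).
import numpy as np

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)) = -log(1 + exp(-x)).
    return -np.logaddexp(0.0, -x)

def sigmoid_contrastive_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """Pairwise sigmoid loss over all B x B image-text pairs.

    t (temperature scale) and b (bias) are assumed initial values,
    not the ones used in the paper.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = t * (img @ txt.T) + b                 # (B, B) similarity logits
    labels = 2.0 * np.eye(len(img)) - 1.0          # +1 on diagonal, -1 elsewhere
    return float(-np.mean(log_sigmoid(labels * logits)))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 16))
txt = img + 0.01 * rng.normal(size=(4, 16))        # near-matched pairs
print(sigmoid_contrastive_loss(img, txt))
```

Correctly paired embeddings should yield a lower loss than shuffled ones, since mismatched pairs then sit on the +1-labeled diagonal.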