[2508.04097] Do Vision-Language Models Leak What They Learn? Adaptive Token-Weighted Model Inversion Attacks
Computer Science > Machine Learning

arXiv:2508.04097 (cs)

[Submitted on 6 Aug 2025 (v1), last revised 1 Mar 2026 (this version, v3)]

Title: Do Vision-Language Models Leak What They Learn? Adaptive Token-Weighted Model Inversion Attacks

Authors: Ngoc-Bao Nguyen, Sy-Tuyen Ho, Koh Jun Hao, Ngai-Man Cheung

Abstract: Model inversion (MI) attacks pose significant privacy risks by reconstructing private training data from trained neural networks. While prior studies have primarily examined unimodal deep networks, the vulnerability of vision-language models (VLMs) remains largely unexplored. In this work, we present the first systematic study of MI attacks on VLMs to understand their susceptibility to leaking private visual training data. Our work makes two main contributions. First, tailored to the token-generative nature of VLMs, we introduce a suite of token-based and sequence-based model inversion strategies, providing a comprehensive analysis of VLMs' vulnerability under different attack formulations. Second, based on the observation that tokens vary in their visual grounding, and hence their gradients differ in informativeness for image reconstruction, we propose Sequence-based Model Inversion with Adaptive Token Weighting (SMI-AW) as a novel MI attack for VLMs. SMI-AW dynamically reweights e...
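To make the abstract's idea concrete, below is a minimal sketch (in PyTorch) of what one optimization step of a sequence-based model inversion attack with adaptive token weighting could look like. This is not the authors' implementation: the `vlm` interface, the gradient-norm proxy for token informativeness, and the softmax weighting are all illustrative assumptions; the paper's actual SMI-AW weighting scheme may differ.

```python
# Hypothetical sketch of one adaptive-token-weighted inversion step.
# Assumes `vlm(x, input_ids)` returns next-token logits of shape (B, T, V)
# and that the image tensor `x` was created with requires_grad=True.
import torch
import torch.nn.functional as F

def smi_aw_step(vlm, x, target_ids, optimizer, temperature=1.0):
    """Optimize image `x` so the VLM assigns high likelihood to the
    target token sequence, weighting tokens by gradient informativeness."""
    optimizer.zero_grad()
    logits = vlm(x, target_ids[:, :-1])            # (B, T, V)
    per_token = F.cross_entropy(                    # (B, T): loss per target token
        logits.reshape(-1, logits.size(-1)),
        target_ids[:, 1:].reshape(-1),
        reduction="none",
    ).view(target_ids.size(0), -1)

    # Adaptive weights (illustrative): tokens whose loss gradients w.r.t.
    # the image are larger are treated as more visually grounded and get
    # more weight; softmax normalizes weights across the sequence.
    grad_norms = []
    for t in range(per_token.size(1)):
        g, = torch.autograd.grad(per_token[:, t].sum(), x, retain_graph=True)
        grad_norms.append(g.flatten(1).norm(dim=1))
    grad_norms = torch.stack(grad_norms, dim=1)     # (B, T)
    weights = F.softmax(grad_norms / temperature, dim=1).detach()

    loss = (weights * per_token).sum(dim=1).mean()  # weighted sequence loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In an attack loop, `x` would be a learnable image (or a GAN latent decoded to an image) updated by repeating this step until the reconstruction converges; detaching the weights keeps the weighting from interfering with the reconstruction gradient itself.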