[2603.26316] SALMUBench: A Benchmark for Sensitive Association-Level Multimodal Unlearning
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.26316 (cs)

[Submitted on 27 Mar 2026]

Title: SALMUBench: A Benchmark for Sensitive Association-Level Multimodal Unlearning

Authors: Cai Selvas-Sala, Lei Kang, Lluis Gomez

Abstract: As multimodal models like CLIP become integral to downstream systems, the need to remove sensitive information from them is critical. However, machine unlearning for contrastively trained encoders remains underexplored, and existing evaluations fail to diagnose fine-grained, association-level forgetting. We introduce SALMUBench (Sensitive Association-Level Multimodal Unlearning), a benchmark built on a synthetic dataset of 60K persona-attribute associations and two foundational models: a Compromised model polluted with this data and a Clean model without it. To isolate unlearning effects, both are trained from scratch on the same 400M-pair retain base, with the Compromised model additionally trained on the sensitive set. We propose a novel evaluation protocol with structured holdout sets (holdout identity, holdout association) to precisely measure unlearning efficacy and collateral damage. Our benchmark reveals that while utility-efficient deletion is feasible, current methods exhibit distinct failure modes: they either fail to forget effectively or over-general...
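The holdout-based protocol described in the abstract can be sketched in pure NumPy. This is a hypothetical illustration, not the benchmark's actual implementation: the split names, the use of mean image-text cosine similarity, and the "similarity drop" metric are all assumptions. The idea is to score each split (forget set, retain set, and the structured holdouts) before and after unlearning; efficacy shows up as a large drop on the forget split, while collateral damage shows up as a drop on the retain and holdout splits.

```python
import numpy as np

def mean_cosine(img_emb, txt_emb):
    # Mean cosine similarity between paired image and text embeddings
    # (rows are paired examples from one evaluation split).
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum(img * txt, axis=1)))

def unlearning_report(before, after, splits):
    # before / after: dicts mapping split name -> (img_emb, txt_emb)
    # produced by the model pre- and post-unlearning.
    # splits: hypothetical names, e.g. ["forget", "retain",
    # "holdout_identity", "holdout_association"].
    # Returns the similarity drop per split: ideally large on
    # "forget" and near zero everywhere else.
    return {s: mean_cosine(*before[s]) - mean_cosine(*after[s])
            for s in splits}
```

A method that drives the drop to zero on the holdout splits while maximizing it on the forget split would, under these assumptions, count as precise association-level forgetting with no collateral damage.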