[2512.14341] Towards Transferable Defense Against Malicious Image Edits
Computer Science > Computer Vision and Pattern Recognition
arXiv:2512.14341 (cs)
[Submitted on 16 Dec 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Towards Transferable Defense Against Malicious Image Edits
Authors: Jie Zhang, Shuai Dong, Shiguang Shan, Xilin Chen

Abstract: Recent approaches employing imperceptible perturbations in input images have demonstrated promising potential to counter malicious manipulations in diffusion-based image editing systems. However, existing methods suffer from limited transferability in cross-model evaluations. To address this, we propose Transferable Defense Against Malicious Image Edits (TDAE), a novel bimodal framework that enhances image immunity against malicious edits through coordinated image-text optimization. Specifically, at the visual defense level, we introduce the FlatGrad Defense Mechanism (FDM), which incorporates gradient regularization into the adversarial objective. By explicitly steering the perturbations toward flat minima, FDM amplifies immune robustness against unseen editing models. For textual enhancement protection, we propose an adversarial optimization paradigm named Dynamic Prompt Defense (DPD), which periodically refines text embeddings to align the editing outcomes of immunized images with those of the original images, then updates the images under o...
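The abstract's FDM idea — folding a gradient-norm penalty into the adversarial immunization objective so the perturbation lands in a flat region of the editing model's loss surface — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the quadratic `edit_loss` stands in for a real diffusion editor's loss, the Hessian-vector term is approximated by finite differences, and the hyperparameter names (`lam`, `rho`) are invented here, not taken from the paper.

```python
import numpy as np

def edit_loss(z, target):
    # Toy surrogate for an editing model's loss (assumption: simple quadratic).
    return 0.5 * np.sum((z - target) ** 2)

def edit_grad(z, target):
    # Analytic gradient of the toy loss above.
    return z - target

def flat_minima_perturb(x, target, steps=50, lr=None, lam=0.1,
                        eps=8 / 255, rho=1e-2):
    """PGD-style immunizing perturbation with a flatness penalty.

    Ascends the editing loss (so edits degrade) while descending the
    gradient-norm penalty ||grad L||, approximating its gradient with a
    finite-difference Hessian-vector product, as in sharpness-aware methods.
    """
    lr = lr if lr is not None else eps / 10
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = x + delta
        g = edit_grad(z, target)
        g_hat = g / (np.linalg.norm(g) + 1e-12)
        # Finite-difference approximation of H @ g_hat, the gradient
        # direction of the flatness penalty ||grad L||.
        hg = (edit_grad(z + rho * g_hat, target) - g) / rho
        direction = g - lam * hg          # ascend loss, favor flat regions
        delta = np.clip(delta + lr * np.sign(direction), -eps, eps)
    return delta

x = np.full(16, 0.5)          # toy "image"
target = np.zeros(16)          # toy edit target
delta = flat_minima_perturb(x, target)
```

The sign-of-gradient step and L-infinity projection follow the usual PGD recipe for imperceptible perturbations; the `lam` term is the only change, trading a little attack strength for a flatter, and by the paper's argument more transferable, solution.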