[2602.20114] Benchmarking Unlearning for Vision Transformers


arXiv - AI · 4 min read

Summary

This article presents a benchmarking study of unlearning algorithms for Vision Transformers (VTs), comparing their performance against CNNs and establishing a reference baseline for future research.

Why It Matters

As machine unlearning becomes essential for developing safe AI, this research fills a gap by evaluating unlearning techniques specifically for Vision Transformers. It provides a foundational benchmark that can guide future studies and applications in AI safety and fairness.

Key Takeaways

  • The study benchmarks unlearning algorithms across different Vision Transformer architectures.
  • It assesses the impact of dataset scale and complexity on unlearning performance.
  • Unified evaluation metrics are introduced to measure forget quality and accuracy.
  • The research reveals how Vision Transformers memorize training data compared to CNNs.
  • It establishes a reference performance baseline for future comparisons of unlearning algorithms.
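The takeaways above mention unified metrics for forget quality and accuracy. A common way to operationalize this in the unlearning literature is to compare the unlearned model against a model retrained from scratch without the forget set, the usual gold standard. The sketch below is illustrative only, not the paper's actual metric; all names and the stub classifiers are hypothetical:

```python
# Hypothetical sketch of unlearning evaluation metrics (not the paper's code).
# "Forget quality" is illustrated as the gap between the unlearned model's
# forget-set accuracy and that of a model retrained without the forget data.

def accuracy(predict, examples):
    """Fraction of (input, label) pairs the model predicts correctly."""
    correct = sum(1 for x, y in examples if predict(x) == y)
    return correct / len(examples)

def evaluate_unlearning(unlearned_predict, retrained_predict,
                        forget_set, retain_set, test_set):
    """Return utility metrics and a forget-quality gap for an unlearned model."""
    return {
        "retain_acc": accuracy(unlearned_predict, retain_set),
        "test_acc": accuracy(unlearned_predict, test_set),
        "forget_acc": accuracy(unlearned_predict, forget_set),
        # Ideal unlearning: forget-set behavior matches the retrained model's,
        # so a smaller gap indicates better forget quality.
        "forget_gap": abs(accuracy(unlearned_predict, forget_set)
                          - accuracy(retrained_predict, forget_set)),
    }

# Toy usage with stub classifiers over integer "inputs" (purely illustrative):
unlearned = lambda x: x % 2
retrained = lambda x: (x + 1) % 2
forget = [(1, 1), (2, 0), (3, 1)]
retain = [(4, 0), (5, 1)]
test = [(6, 0), (7, 1)]
metrics = evaluate_unlearning(unlearned, retrained, forget, retain, test)
```

In a real benchmark the predict functions would be trained networks (e.g., a ViT or Swin-T classifier) and the sets would be image/label pairs; the structure of the comparison is the same.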

Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.20114 (cs) · Submitted on 23 Feb 2026

Title: Benchmarking Unlearning for Vision Transformers
Authors: Kairan Zhao, Iurie Luca, Peter Triantafillou

Abstract: Research in machine unlearning (MU) has gained strong momentum: MU is now widely regarded as a critical capability for building safe and fair AI. In parallel, research into transformer architectures for computer vision tasks has been highly successful: increasingly, Vision Transformers (VTs) emerge as strong alternatives to CNNs. Yet MU research for vision tasks has largely centered on CNNs, not VTs. While benchmarking efforts for MU have addressed LLMs, diffusion models, and CNNs, none exist for VTs. This work is the first to attempt this, benchmarking MU algorithm performance across different VT families (ViT and Swin-T) and at different capacities. The work employs (i) different datasets, selected to assess the impacts of dataset scale and complexity; (ii) different MU algorithms, selected to represent fundamentally different approaches to MU; and (iii) both single-shot and continual unlearning protocols. Additionally, it focuses on benchmarking MU algorithms that leverage training data memorization, since leveraging memorization has recently been discovered to significantly improve the performance of previously SO...

