[2603.00436] ROKA: Robust Knowledge Unlearning against Adversaries


arXiv - Machine Learning

About this article


Computer Science > Machine Learning — arXiv:2603.00436 (cs), submitted on 28 Feb 2026

Title: ROKA: Robust Knowledge Unlearning against Adversaries

Authors: Jinmyeong Shin, Joshua Tapia, Nicholas Ferreira, Gabriel Diaz, Moayed Daneshyari, Hyeran Jeon

Abstract: Machine unlearning is critical for data privacy, yet existing methods often cause Knowledge Contamination by unintentionally damaging related knowledge. The degraded model performance that follows unlearning has recently been leveraged for new inference and backdoor attacks; most such studies design adversarial unlearning requests that require poisoning or duplicating training data. In this study, we introduce a new unlearning-induced attack model, the indirect unlearning attack, which requires no data manipulation but instead exploits the consequences of knowledge contamination to perturb model accuracy on security-critical predictions. To mitigate this attack, we introduce a theoretical framework that models neural networks as Neural Knowledge Systems. Building on it, we propose ROKA, a robust unlearning strategy centered on Neural Healing. Unlike conventional unlearning methods that only destroy information, ROKA constructively rebalances the model by nullifying the influence of forgotten data while strengthening its conceptual neighbors. To the best of our knowl...
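The core tension the abstract describes — erasing forgotten data without contaminating related knowledge — can be illustrated with a generic two-objective unlearning sketch. This is NOT the ROKA algorithm (the paper's method is not reproduced here); it is a minimal toy that ascends the loss on a forget set while descending on a retain set of "conceptual neighbors", using a scalar linear model and hypothetical data chosen for illustration:

```python
# Toy sketch of unlearning with retention (generic illustration, not ROKA):
# gradient ASCENT on the forget set erases the unwanted fit, while weighted
# gradient DESCENT on the retain set "heals" the neighboring knowledge.

def grad(w, x, y):
    # gradient of the squared error 0.5 * (w*x - y)**2 w.r.t. w
    return (w * x - y) * x

def loss(w, pts):
    # mean squared-error loss over a list of (x, y) points
    return sum(0.5 * (w * x - y) ** 2 for x, y in pts) / len(pts)

def train(pts, w=0.0, lr=0.1, steps=200):
    # plain gradient descent on all points
    for _ in range(steps):
        w -= lr * sum(grad(w, x, y) for x, y in pts) / len(pts)
    return w

def unlearn(w, forget, retain, lr=0.05, steps=50, retain_weight=4.0):
    # ascend on the forget set, descend (more strongly) on the retain set;
    # retain_weight keeps the ascent from dragging neighbors down with it
    for _ in range(steps):
        w += lr * sum(grad(w, x, y) for x, y in forget) / len(forget)
        w -= lr * retain_weight * sum(grad(w, x, y) for x, y in retain) / len(retain)
    return w

retain = [(1.0, 2.0), (2.0, 4.0)]   # "conceptual neighbors" to preserve
forget = [(1.0, 5.0)]               # data point to unlearn

w0 = train(retain + forget)         # model contaminated by the forget point
w1 = unlearn(w0, forget, retain)    # forget loss rises, retain loss falls
```

After unlearning, the loss on the forget point increases while the loss on the retain points decreases, which is the qualitative behavior the abstract attributes to constructive rebalancing; without the retain term, the ascent step alone would also damage the neighbors (the Knowledge Contamination the paper targets).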

Originally published on March 03, 2026. Curated by AI News.

