[2603.19302] Parameter-Efficient Token Embedding Editing for Clinical Class-Level Unlearning
Computer Science > Machine Learning
arXiv:2603.19302 (cs) [Submitted on 11 Mar 2026]

Title: Parameter-Efficient Token Embedding Editing for Clinical Class-Level Unlearning
Authors: Iyad Ait Hou, Shrenik Borad, Harsh Sharma, Pooja Srinivasan, Rebecca Hwa, Aya Zirikly

Abstract: Machine unlearning is increasingly important for clinical language models, where privacy regulations and institutional policies may require removing sensitive information from deployed systems without retraining from scratch. In practice, deletion requests must balance effective forgetting of targeted information with preservation of model utility and minimal parameter modification. We introduce Sparse Token Embedding Unlearning (STEU), a parameter-efficient method for behavioral class-level unlearning that updates only PMI-selected token embeddings together with a small classifier head while keeping all encoder layers frozen. Across experiments on MIMIC-IV, MIMIC-III, and eICU using BioClinicalBERT, BERT-base, and DistilBERT, STEU consistently suppresses the target class while largely preserving retained task performance. In the primary MIMIC-IV setting, STEU achieves near-complete forgetting (forget F1 = 0.0004) while maintaining competitive retained utility (retain avg F1 = 0.4766) after modifying only 0.19% of model parameters.
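The abstract does not specify how the PMI-based token selection is computed. A minimal sketch of one plausible formulation, assuming document-level co-occurrence counts and the standard definition PMI(t, c) = log(P(t, c) / (P(t) P(c))); the function and variable names here are illustrative, not from the paper:

```python
import math
from collections import Counter

def pmi_scores(docs, labels, target_class):
    """Score each token by its PMI with the target (forget) class.

    PMI(t, c) = log( P(t, c) / (P(t) * P(c)) ), with probabilities
    estimated from document-level occurrence counts. Tokens that never
    co-occur with the target class are omitted (PMI undefined).
    """
    n_docs = len(docs)
    tok_count = Counter()    # documents containing token t
    joint_count = Counter()  # target-class documents containing t
    class_docs = sum(1 for y in labels if y == target_class)
    for doc, y in zip(docs, labels):
        for tok in set(doc):  # count each token once per document
            tok_count[tok] += 1
            if y == target_class:
                joint_count[tok] += 1
    p_c = class_docs / n_docs
    scores = {}
    for tok, n_t in tok_count.items():
        p_t = n_t / n_docs
        p_tc = joint_count[tok] / n_docs
        if p_tc > 0:
            scores[tok] = math.log(p_tc / (p_t * p_c))
    return scores

def select_tokens(scores, k):
    """Return the k tokens most associated with the forget class."""
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

# Toy example: label 1 is the hypothetical forget class.
docs = [["sepsis", "fever"], ["sepsis", "lactate"],
        ["cough", "fever"], ["cough", "cold"]]
labels = [1, 1, 0, 0]
selected = select_tokens(pmi_scores(docs, labels, target_class=1), k=2)
```

Under the method described in the abstract, only the embedding rows for `selected` tokens (plus the classifier head) would then receive gradient updates, with every encoder layer frozen.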