[2512.03310] Randomized Masked Finetuning: An Efficient Way to Mitigate Memorization of PIIs in LLMs
Summary
The paper introduces Randomized Masked Finetuning (RMFT), a technique designed to reduce the memorization of personally identifiable information (PII) in large language models (LLMs). RMFT cuts PII extraction rates far more than baseline fine-tuning or deduplication while incurring only a small perplexity cost.
Why It Matters
As LLMs increasingly integrate into various applications, the risk of PII memorization poses serious privacy concerns. RMFT presents a viable solution that balances privacy preservation with model performance, making it crucial for developers and researchers focused on AI safety and ethics.
Key Takeaways
- RMFT reduces PII memorization significantly while maintaining model performance.
- The technique achieved roughly an 80% reduction in both Total and Seen Extraction Rates compared to baseline fine-tuning, with only a 5.73% increase in perplexity.
- Introduces MaxTER, a Pareto-optimal evaluation framework for assessing privacy-utility tradeoffs.
Paper Details
Computer Science > Computation and Language, arXiv:2512.03310 (cs.CL)
Submitted on 2 Dec 2025 (v1); last revised 18 Feb 2026 (this version, v3)
Title: Randomized Masked Finetuning: An Efficient Way to Mitigate Memorization of PIIs in LLMs
Authors: Kunj Joshi, David A. Smith
Abstract: Memorization in natural language models, especially Large Language Models (LLMs), poses severe security and privacy risks, as models tend to memorize personally identifiable information (PII) from training data. We introduce Randomized Masked Fine-Tuning (RMFT), a novel privacy-preserving fine-tuning technique that reduces PII memorization while minimizing performance impact. Using the Enron Email Dataset, we demonstrate that RMFT achieves an 80.81% reduction in Total Extraction Rate and an 80.17% reduction in Seen Extraction Rate compared to baseline fine-tuning, outperforming deduplication methods while incurring only a 5.73% increase in perplexity. We present MaxTER, a Pareto-optimal evaluation framework for assessing privacy-utility tradeoffs, and compare RMFT against deduplication using the Area Under the Response Curve (AURC) metric.
Subjects: Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
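The extraction-rate figures quoted in the abstract can be understood as simple set ratios: the share of known PII strings an extraction attack recovers from the model, and the relative drop in that share under RMFT. The sketch below is an illustrative computation under that reading; the function names and sample values are hypothetical, and the paper's actual prompting protocol and PII sets are not reproduced here:

```python
def extraction_rate(extracted: set[str], known_piis: set[str]) -> float:
    """Fraction of known PII strings recovered verbatim by extraction attacks."""
    if not known_piis:
        return 0.0
    return len(extracted & known_piis) / len(known_piis)

def relative_reduction(baseline: float, treated: float) -> float:
    """Percent reduction of the treated model's rate relative to the baseline."""
    return 100.0 * (baseline - treated) / baseline

# Illustrative numbers only, not the paper's measurements:
known = {"a@x.com", "b@x.com", "c@x.com", "d@x.com", "e@x.com"}
baseline_rate = extraction_rate({"a@x.com", "b@x.com", "c@x.com", "d@x.com"}, known)  # 0.8
rmft_rate = extraction_rate({"a@x.com"}, known)  # 0.2
print(relative_reduction(baseline_rate, rmft_rate))  # 75.0
```

The paper's reported 80.81% figure is this kind of relative reduction computed for Total Extraction Rate against baseline fine-tuning.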