[2512.03310] Randomized Masked Finetuning: An Efficient Way to Mitigate Memorization of PIIs in LLMs


arXiv - Machine Learning

Summary

The paper introduces Randomized Masked Finetuning (RMFT), a technique designed to reduce the memorization of personally identifiable information (PIIs) in large language models (LLMs), demonstrating significant performance improvements over traditional methods.

Why It Matters

As LLMs increasingly integrate into various applications, the risk of PII memorization poses serious privacy concerns. RMFT presents a viable solution that balances privacy preservation with model performance, making it crucial for developers and researchers focused on AI safety and ethics.

Key Takeaways

  • RMFT reduces PII memorization significantly while maintaining model performance.
  • The technique achieved an 80.81% reduction in Total Extraction Rate (and an 80.17% reduction in Seen Extraction Rate) compared to baseline fine-tuning.
  • Introduces MaxTER, a new evaluation framework for assessing privacy-utility tradeoffs.
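An extraction rate like the one in the takeaways above can be sketched as a simple leak test: prompt the fine-tuned model with contexts from the training data and count how often the associated PII appears verbatim in the continuation. The record format and the `generate` callable below are illustrative assumptions, not the paper's actual evaluation harness:

```python
def extraction_rate(pii_records, generate):
    """Fraction of (prompt, pii) pairs whose continuation leaks the PII
    verbatim -- a simplified stand-in for the paper's Total Extraction Rate.

    pii_records: iterable of (prompt, pii_string) pairs drawn from training data.
    generate: callable mapping a prompt to the model's text continuation.
    """
    records = list(pii_records)
    if not records:
        return 0.0
    leaked = sum(1 for prompt, pii in records if pii in generate(prompt))
    return leaked / len(records)


# Toy usage with a stub "model" that leaks one of two identifiers.
stub = {"Contact Kay at": " kay@enron.com", "Call Jeff at": " the office"}.get
records = [("Contact Kay at", "kay@enron.com"), ("Call Jeff at", "713-555-0100")]
rate = extraction_rate(records, lambda p: stub(p, ""))
# rate == 0.5: one of the two PII strings was reproduced verbatim
```

Comparing this rate before and after a mitigation (here, RMFT vs. baseline fine-tuning) gives the relative reduction figures the paper reports.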

Paper Details

Computer Science > Computation and Language, arXiv:2512.03310 (cs)
Submitted on 2 Dec 2025 (v1), last revised 18 Feb 2026 (this version, v3)

Title: Randomized Masked Finetuning: An Efficient Way to Mitigate Memorization of PIIs in LLMs
Authors: Kunj Joshi, David A. Smith

Abstract: Memorization in natural language models, especially Large Language Models (LLMs), poses severe security and privacy risks, as models tend to memorize personally identifiable information (PII) from training data. We introduce Randomized Masked Fine-Tuning (RMFT), a novel privacy-preserving fine-tuning technique that reduces PII memorization while minimizing performance impact. Using the Enron Email Dataset, we demonstrate that RMFT achieves an 80.81% reduction in Total Extraction Rate and an 80.17% reduction in Seen Extraction Rate compared to baseline fine-tuning, outperforming deduplication methods while incurring only a 5.73% increase in perplexity. We also present MaxTER, a Pareto-optimal evaluation framework for assessing privacy-utility tradeoffs, and compare RMFT against deduplication using the Area Under the Response Curve (AURC) metric.

Subjects: Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
Cite as: arXiv:2512.03310 [cs.CL]
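The core move the abstract describes, randomly masking detected PII spans in the fine-tuning text so the model rarely sees the same identifier intact, can be sketched roughly as follows. The regex detectors, mask token, and masking probability are illustrative assumptions; the paper's actual PII detection and tokenizer-level handling may differ:

```python
import random
import re

# Illustrative PII detectors; the paper's actual detection pipeline may differ.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),      # email addresses
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # US-style phone numbers
]

MASK_TOKEN = "[MASK]"  # placeholder; the real mask token is tokenizer-specific


def randomized_pii_mask(text, mask_prob=0.8, seed=None):
    """Replace each detected PII span with MASK_TOKEN with probability mask_prob.

    Masking randomly rather than always means that, across epochs, the
    model only occasionally sees a given identifier intact, which blunts
    verbatim memorization while keeping most of the context available.
    """
    rng = random.Random(seed)
    spans = sorted(
        (m.span() for p in PII_PATTERNS for m in p.finditer(text)),
        reverse=True,
    )
    for start, end in spans:  # edit from the end so earlier offsets stay valid
        if rng.random() < mask_prob:
            text = text[:start] + MASK_TOKEN + text[end:]
    return text
```

With `mask_prob=1.0` every detected span is masked, e.g. `randomized_pii_mask("Email kay@enron.com or call 713-555-0100.", mask_prob=1.0)` yields `"Email [MASK] or call [MASK]."`; at lower probabilities each epoch sees a different random subset of identifiers hidden.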

