[2602.12806] RAT-Bench: A Comprehensive Benchmark for Text Anonymization
Summary
RAT-Bench is a comprehensive benchmark that evaluates text anonymization tools by their effectiveness at preventing re-identification, highlighting the limitations of current methods and offering insights for future improvements.
Why It Matters
As personal data usage in AI increases, ensuring effective anonymization is crucial to protect privacy. RAT-Bench addresses the gap in evaluating anonymization tools, providing a framework that can enhance privacy measures in machine learning applications.
Key Takeaways
- RAT-Bench evaluates text anonymization tools based on re-identification risk.
- Current anonymization methods are often inadequate, especially with non-standard identifiers.
- LLM-based anonymizers offer better privacy-utility trade-offs but at higher computational costs.
- The benchmark supports evaluation across different languages and demographics.
- Future anonymization tools should focus on improving effectiveness and expanding geographic applicability.
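The NER-based anonymizers evaluated here share a common pattern: detect identifier spans in the text, then replace each span with a typed placeholder. A minimal sketch of that replacement step, using hypothetical regex patterns as a stand-in for a real detector (production tools such as Presidio use trained NER models, not regexes like these):

```python
import re

# Hypothetical patterns standing in for a real detector's output.
# Note what they miss: names, jobs, and other indirect identifiers
# slip through, which is exactly the gap the benchmark probes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier span with a <TYPE> placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(scrub("Reach Ana at ana.k@example.com or 555-123-4567."))
# -> Reach Ana at <EMAIL> or <PHONE>.
```

The residual "Ana" illustrates the paper's point: removing standard identifier types does not by itself prevent re-identification from what remains.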
Computer Science > Computation and Language — arXiv:2602.12806 (cs)
Submitted on 13 Feb 2026
Title: RAT-Bench: A Comprehensive Benchmark for Text Anonymization
Authors: Nataša Krčo, Zexi Yao, Matthieu Meeus, Yves-Alexandre de Montjoye
Abstract: Data containing personal information is increasingly used to train, fine-tune, or query Large Language Models (LLMs). Text is typically scrubbed of identifying information prior to use, often with tools such as Microsoft's Presidio or Anthropic's PII purifier. These tools have traditionally been evaluated on their ability to remove specific identifiers (e.g., names), yet their effectiveness at preventing re-identification remains unclear. We introduce RAT-Bench, a comprehensive benchmark for text anonymization tools based on re-identification risk. Using U.S. demographic statistics, we generate synthetic text containing various direct and indirect identifiers across domains, languages, and difficulty levels. We evaluate a range of NER- and LLM-based text anonymization tools and, based on the attributes an LLM-based attacker is able to correctly infer from the anonymized text, we report the risk of re-identification in the U.S. population, while properly accounting for the disparate impact of identifiers. We find that, while capabilities vary widely, even the best tools are f...
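The abstract's core metric — re-identification risk given the attributes an attacker correctly infers — can be illustrated with a toy uniqueness computation: among everyone in the population who matches the inferred attributes, the attacker's chance of picking the right person is 1/k for a matching group of size k. The population records below are invented for the example; RAT-Bench itself derives risk from actual U.S. demographic statistics and weights identifiers by their disparate impact, which this sketch omits:

```python
def reidentification_risk(inferred: dict, population: list[dict]) -> float:
    """Chance of correctly re-identifying one person, given that the
    attacker guesses uniformly among all population records matching
    the inferred attributes (0.0 if nobody matches)."""
    matches = [
        person for person in population
        if all(person.get(key) == value for key, value in inferred.items())
    ]
    return 1.0 / len(matches) if matches else 0.0

# Invented toy population; the real benchmark uses U.S. demographic data.
population = [
    {"zip": "10001", "age": 34, "job": "nurse"},
    {"zip": "10001", "age": 34, "job": "teacher"},
    {"zip": "10001", "age": 51, "job": "nurse"},
]

print(reidentification_risk({"zip": "10001", "age": 34}, population))
# -> 0.5  (two people match zip + age)
print(reidentification_risk({"zip": "10001", "age": 34, "job": "nurse"}, population))
# -> 1.0  (adding the job makes the record unique)
```

The second call shows why leaving an indirect identifier like a job title in "anonymized" text matters: each extra inferred attribute shrinks the matching group and drives the risk toward 1.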