[2602.15866] NLP Privacy Risk Identification in Social Media (NLP-PRISM): A Survey
Summary
This survey presents the NLP-PRISM framework for identifying privacy risks in social media NLP applications, analyzing 203 peer-reviewed papers to highlight vulnerabilities and propose solutions.
Why It Matters
As NLP technologies become integral to social media, understanding privacy risks is crucial for ethical AI deployment. This survey provides a systematic approach to assess vulnerabilities, helping researchers and practitioners improve privacy measures in their applications.
Key Takeaways
- The NLP-PRISM framework evaluates privacy risks across six dimensions.
- Significant gaps in privacy research were identified in various NLP tasks.
- Transformer models show a trade-off between utility and privacy preservation.
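The utility-privacy trade-off can be made concrete with a small sketch. The F1 values below are hypothetical, chosen to sit inside the survey's reported 0.58-0.84 range; the paper itself does not publish this calculation, so treat it only as an illustration of how the reported drop could be measured:

```python
def utility_drop_points(baseline_f1: float, private_f1: float) -> float:
    """Absolute F1 drop, in percentage points, from switching to a
    privacy-preserving fine-tuned model."""
    return 100.0 * (baseline_f1 - private_f1)

def utility_drop_relative(baseline_f1: float, private_f1: float) -> float:
    """Relative F1 drop, as a percentage of the baseline score."""
    return 100.0 * (baseline_f1 - private_f1) / baseline_f1

# Hypothetical scores: a non-private transformer at the top of the
# survey's reported range, and the same model after privacy-preserving
# fine-tuning (e.g. with differential privacy).
baseline = 0.84
private = 0.74

points = utility_drop_points(baseline, private)      # 10.0 points
relative = utility_drop_relative(baseline, private)  # ~11.9 %
```

Whether a reported "1%-23% drop" is absolute points or relative to the baseline changes the interpretation substantially, which is why both variants are shown.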
Computer Science > Computation and Language
arXiv:2602.15866 (cs) [Submitted on 26 Jan 2026]
Title: NLP Privacy Risk Identification in Social Media (NLP-PRISM): A Survey
Authors: Dhiman Goswami, Jai Kruthunz Naveen Kumar, Sanchari Das
Abstract: Natural Language Processing (NLP) is integral to social media analytics, but it often processes content containing Personally Identifiable Information (PII), behavioral cues, and metadata, raising privacy risks such as surveillance, profiling, and targeted advertising. To systematically assess these risks, we review 203 peer-reviewed papers and propose the NLP Privacy Risk Identification in Social Media (NLP-PRISM) framework, which evaluates vulnerabilities across six dimensions: data collection, preprocessing, visibility, fairness, computational risk, and regulatory compliance. Our analysis shows that transformer models achieve F1-scores ranging from 0.58 to 0.84 but incur a 1%-23% drop under privacy-preserving fine-tuning. Using NLP-PRISM, we examine privacy coverage in six NLP tasks: sentiment analysis (16 papers), emotion detection (14), offensive language identification (19), code-mixed processing (39), native language identification (29), and dialect detection (24), revealing substantial gaps in privacy research. We further found a (reduced by 2%-9%) trade-off in model utili...
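The abstract's first dimension, data collection and preprocessing of PII-bearing content, can be illustrated with a minimal redaction pass. The patterns below are my own illustrative examples, not the paper's method, and a production PII detector would need far broader coverage (names, locations, IDs) and ideally an NER model rather than regexes:

```python
import re

# Minimal illustrative PII patterns; real social-media text requires
# much more coverage than these three categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "HANDLE": re.compile(r"@\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a bracketed type label.
    EMAIL runs before HANDLE so the '@' inside an address is not
    mistaken for a user handle."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: redact("email jane@example.com or ping @jane at 555-123-4567")
# -> "email [EMAIL] or ping [HANDLE] at [PHONE]"
```

Even such a crude preprocessing step changes the downstream task's inputs, which is one mechanism behind the utility drops the survey reports for privacy-preserving pipelines.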