[2602.15866] NLP Privacy Risk Identification in Social Media (NLP-PRISM): A Survey


Summary

This survey presents the NLP-PRISM framework for identifying privacy risks in social media NLP applications, analyzing 203 peer-reviewed papers to highlight vulnerabilities and propose solutions.

Why It Matters

As NLP technologies become integral to social media, understanding privacy risks is crucial for ethical AI deployment. This survey provides a systematic approach to assess vulnerabilities, helping researchers and practitioners improve privacy measures in their applications.

Key Takeaways

  • The NLP-PRISM framework evaluates privacy risks across six dimensions.
  • Significant gaps in privacy research were identified in various NLP tasks.
  • Transformer models show a trade-off between utility and privacy preservation.

Computer Science > Computation and Language — arXiv:2602.15866 (cs) [Submitted on 26 Jan 2026]

Title: NLP Privacy Risk Identification in Social Media (NLP-PRISM): A Survey
Authors: Dhiman Goswami, Jai Kruthunz Naveen Kumar, Sanchari Das

Abstract: Natural Language Processing (NLP) is integral to social media analytics but often processes content containing Personally Identifiable Information (PII), behavioral cues, and metadata, raising privacy risks such as surveillance, profiling, and targeted advertising. To systematically assess these risks, we review 203 peer-reviewed papers and propose the NLP Privacy Risk Identification in Social Media (NLP-PRISM) framework, which evaluates vulnerabilities across six dimensions: data collection, preprocessing, visibility, fairness, computational risk, and regulatory compliance. Our analysis shows that transformer models achieve F1-scores ranging from 0.58 to 0.84 but incur a 1% to 23% drop under privacy-preserving fine-tuning. Using NLP-PRISM, we examine privacy coverage in six NLP tasks: sentiment analysis (16 papers), emotion detection (14), offensive language identification (19), code-mixed processing (39), native language identification (29), and dialect detection (24), revealing substantial gaps in privacy research. We further found a trade-off (a 2% to 9% reduction) in model utili...
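To make the framework's "preprocessing" dimension concrete, here is a minimal sketch of PII redaction applied to social media text before it reaches an NLP model. This is an illustration only, not code from the survey; the pattern set (email, phone, user handle) and the placeholder labels are simplified assumptions, far from a production-grade PII detector.

```python
import re

# Hypothetical, simplified PII patterns for illustration only.
# Real deployments use trained NER-based detectors, not just regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "HANDLE": re.compile(r"@\w+"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders before analysis."""
    for label, pattern in PII_PATTERNS.items():
        # Emails are redacted first, so the HANDLE pattern does not
        # clobber the "@domain" part of an address.
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("DM me at jane.doe@example.com or 555-123-4567, cc @janedoe"))
```

A redaction step like this trades recall of user-specific signal for privacy, which is one small instance of the utility/privacy tension the survey quantifies for privacy-preserving fine-tuning.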
