[2602.23079] Assessing Deanonymization Risks with Stylometry-Assisted LLM Agent

arXiv - Machine Learning · 3 min read · Article

Summary

This article introduces an LLM agent that assesses and mitigates deanonymization risks in textual data through SALA (Stylometry-Assisted LLM Analysis), a method that combines quantitative stylometric features with LLM reasoning.

Why It Matters

As large language models (LLMs) become more prevalent, the risk of deanonymization in textual data poses significant privacy concerns. This research highlights the importance of developing interpretable methods to safeguard authorship privacy while maintaining the integrity of textual content.

Key Takeaways

  • The SALA method integrates stylometric analysis with LLM reasoning for effective authorship attribution (a minimal sketch of this combination follows this list).
  • Experiments on large-scale news datasets show that SALA, particularly when augmented with a database module, achieves high inference accuracy across scenarios.
  • A guided recomposition strategy can reduce authorship identifiability while preserving meaning.
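
As a rough illustration of how stylometric analysis can be combined with LLM reasoning, the sketch below computes a handful of common quantitative style features and folds them into an attribution prompt. The feature set, prompt wording, and function names here are illustrative assumptions, not the paper's actual SALA implementation.

```python
# Illustrative sketch only: this feature set and prompt format are assumptions,
# not the paper's actual SALA features or prompts.
import re
from collections import Counter

# A small set of common English function words often used in stylometry.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was", "for", "with"]


def stylometric_features(text: str) -> dict:
    """Compute a few simple, quantitative stylometric features."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / total,
        "type_token_ratio": len(counts) / total,
        "function_word_freq": {w: round(counts[w] / total, 4) for w in FUNCTION_WORDS},
    }


def build_attribution_prompt(text: str, candidate_authors: list[str]) -> str:
    """Fold the numeric style profile into a prompt so an LLM can reason over
    both the raw text and its measurable stylistic signals."""
    profile = stylometric_features(text)
    return (
        "You are an authorship analyst. Using the stylometric profile and the text,\n"
        f"decide which of these candidate authors most likely wrote it: {candidate_authors}.\n"
        f"Stylometric profile: {profile}\n"
        f"Text:\n{text}\n"
        "Explain which stylistic cues drive your decision."
    )


if __name__ == "__main__":
    sample = "The markets rallied today. Analysts, however, remained cautious about inflation."
    print(build_attribution_prompt(sample, ["Author A", "Author B"]))
```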

Computer Science > Computation and Language
arXiv:2602.23079 (cs) [Submitted on 26 Feb 2026]

Title: Assessing Deanonymization Risks with Stylometry-Assisted LLM Agent
Authors: Boyang Zhang, Yang Zhang

Abstract: The rapid advancement of large language models (LLMs) has enabled powerful authorship inference capabilities, raising growing concerns about unintended deanonymization risks in textual data such as news articles. In this work, we introduce an LLM agent designed to evaluate and mitigate such risks through a structured, interpretable pipeline. Central to our framework is the proposed SALA (Stylometry-Assisted LLM Analysis) method, which integrates quantitative stylometric features with LLM reasoning for robust and transparent authorship attribution. Experiments on large-scale news datasets demonstrate that SALA, particularly when augmented with a database module, achieves high inference accuracy in various scenarios. Finally, we propose a guided recomposition strategy that leverages the agent's reasoning trace to generate rewriting prompts, effectively reducing authorship identifiability while preserving textual meaning. Our findings highlight both the deanonymization potential of LLM agents and the importance of interpretable, proactive defenses for safeguarding author privacy.
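
The guided recomposition idea turns the agent's attribution reasoning into a defense: the stylistic cues it cites are converted into rewriting instructions for the text. A minimal sketch of that step, with hypothetical cue wording and prompt text that are not taken from the paper, might look like this:

```python
# Illustrative sketch of the "reasoning trace -> rewriting prompt" idea; the cue
# wording and prompt text are assumptions, not the paper's exact strategy.

def build_recomposition_prompt(text: str, reasoning_trace: list[str]) -> str:
    """Turn the stylistic cues cited by the attribution agent into rewriting
    instructions that reduce identifiability while preserving meaning."""
    cues = "\n".join(f"- {cue}" for cue in reasoning_trace)
    return (
        "Rewrite the text below so that the following identifying stylistic cues are\n"
        "removed or neutralized, while keeping the factual content and meaning intact:\n"
        f"{cues}\n\n"
        f"Text:\n{text}"
    )


if __name__ == "__main__":
    trace = [
        "unusually long average sentence length",
        "frequent parenthetical asides",
        "habitual sentence-initial 'However,'",
    ]
    print(build_recomposition_prompt(
        "However, the committee (as expected) delayed its vote on the measure.", trace))
```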

