[2602.13240] AST-PAC: AST-guided Membership Inference for Code


Summary

The paper introduces AST-PAC, a novel method for membership inference attacks on code models that leverages Abstract Syntax Trees to generate syntactically valid calibration samples, supporting data-governance and copyright auditing.

Why It Matters

As large language models increasingly utilize proprietary code datasets, ensuring compliance with data usage regulations is critical. AST-PAC offers a promising approach to audit these models, addressing significant concerns about unauthorized data usage and enhancing the reliability of provenance auditing.

Key Takeaways

  • AST-PAC improves membership inference attacks on code models by using Abstract Syntax Tree based perturbations.
  • The method improves as file size and syntactic complexity grow, where standard PAC degrades.
  • Syntactic awareness is crucial for effective calibration in code language models.
  • PAC methods need to adapt to the unique characteristics of code syntax for better results.
  • Future research should focus on syntax-aware and size-adaptive techniques.
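The membership inference attacks discussed above start from the Loss Attack baseline: score a sample by its average negative log-likelihood under the audited model, and flag low-loss samples as likely training members. A minimal sketch (the function names and the threshold value are hypothetical, not from the paper):

```python
def loss_attack_score(token_logprobs):
    """Loss Attack baseline: average negative log-likelihood of a
    sample under the audited model. Lower loss suggests the sample
    was seen during training. token_logprobs would come from the
    model being audited (one log-probability per token)."""
    return -sum(token_logprobs) / len(token_logprobs)


def is_likely_member(token_logprobs, threshold=2.0):
    """Decide membership by thresholding the loss score.
    The threshold here is a hypothetical calibration value; in
    practice it is tuned on held-out member/non-member data."""
    return loss_attack_score(token_logprobs) < threshold
```

Calibration-based methods like PAC refine this idea by comparing the sample's loss against the losses of perturbed (augmented) versions of the same sample, which is where syntactically valid perturbations become important for code.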

Computer Science > Artificial Intelligence
arXiv:2602.13240 (cs) · Submitted on 30 Jan 2026

Title: AST-PAC: AST-guided Membership Inference for Code
Authors: Roham Koohestani, Ali Al-Kaswan, Jonathan Katzy, Maliheh Izadi

Abstract: Code Large Language Models are frequently trained on massive datasets containing restrictively licensed source code. This creates urgent data governance and copyright challenges. Membership Inference Attacks (MIAs) can serve as an auditing mechanism to detect unauthorized data usage in models. While attacks like the Loss Attack provide a baseline, more involved methods like Polarized Augment Calibration (PAC) remain underexplored in the code domain. This paper presents an exploratory study evaluating these methods on 3B--7B parameter code models. We find that while PAC generally outperforms the Loss baseline, its effectiveness relies on augmentation strategies that disregard the rigid syntax of code, leading to performance degradation on larger, complex files. To address this, we introduce AST-PAC, a domain-specific adaptation that utilizes Abstract Syntax Tree (AST) based perturbations to generate syntactically valid calibration samples. Preliminary results indicate that AST-PAC improves as syntactic size grows, where PAC degrades, but under-mutates small files and underperforms on alphanumeric-rich ...
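The core idea behind AST-PAC, perturbing code through its syntax tree so every calibration sample still parses, can be illustrated with a minimal sketch. This is not the paper's implementation; it uses Python's standard `ast` module, and identifier renaming is just one example of a syntax-preserving perturbation:

```python
import ast


class IdentifierRenamer(ast.NodeTransformer):
    """Rename every identifier consistently (x -> var_0, var_1, ...).
    Because the rewrite happens on the AST, the perturbed program is
    guaranteed to remain syntactically valid, unlike token-level
    augmentations that can break code syntax."""

    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id not in self.mapping:
            self.mapping[node.id] = f"var_{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node


source = "total = 0\nfor x in data:\n    total = total + x\n"
tree = ast.parse(source)
perturbed = ast.unparse(IdentifierRenamer().visit(tree))
print(perturbed)
```

A calibration-based attack would then compare the audited model's loss on the original file against its loss on such perturbed variants; a large gap suggests the original was memorized during training.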

