[2602.17284] Efficient privacy loss accounting for subsampling and random allocation

arXiv - Machine Learning 4 min read Article

Summary

This paper presents an efficient method for privacy loss accounting under subsampling and random allocation, showing that random allocation can match or outperform the standard Poisson sampling in differential privacy settings.

Why It Matters

As data privacy becomes increasingly critical in machine learning, this research addresses the limitations of existing privacy accounting methods. By improving the efficiency and accuracy of privacy loss calculations, it enhances the utility of differentially private algorithms, which is essential for maintaining user trust and compliance with regulations.

Key Takeaways

  • Introduces a new method for privacy loss accounting in subsampling.
  • Shows that random allocation can outperform Poisson sampling in differential privacy settings.
  • Develops tools for accurate privacy loss accounting, reducing manual analysis requirements.
  • Demonstrates improved privacy-utility trade-offs for training via DP-SGD.
  • Addresses significant shortcomings in current theoretical analyses of privacy parameters.
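The sampling scheme at issue can be made concrete. The sketch below (illustrative code, not from the paper; function names and parameters are assumptions) contrasts the two participation patterns: under Poisson sampling a user's data is included in each of the t steps independently with probability q, while under random allocation it is used in exactly k steps chosen uniformly from the t steps.

```python
import random

def poisson_participation(t, q, rng):
    """Poisson sampling: the user's data is included in each of the
    t steps independently with probability q."""
    return [i for i in range(t) if rng.random() < q]

def random_allocation_participation(t, k, rng):
    """Random allocation: the user's data is used in exactly k steps
    chosen uniformly at random from a set of t steps."""
    return sorted(rng.sample(range(t), k))

rng = random.Random(0)
t, k = 10, 3
q = k / t  # Poisson rate matching the expected participation count
print(poisson_participation(t, q, rng))
print(random_allocation_participation(t, k, rng))
```

Note that with q = k/t the two schemes have the same expected number of participations, but random allocation fixes the count exactly, which is what the paper's privacy analysis exploits.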

Computer Science > Machine Learning — arXiv:2602.17284 (cs) [Submitted on 19 Feb 2026]

Title: Efficient privacy loss accounting for subsampling and random allocation

Authors: Vitaly Feldman, Moshe Shenfeld

Abstract: We consider the privacy amplification properties of a sampling scheme in which a user's data is used in $k$ steps chosen randomly and uniformly from a sequence (or set) of $t$ steps. This sampling scheme has been recently applied in the context of differentially private optimization (Chua et al., 2024a; Choquette-Choo et al., 2025) and communication-efficient high-dimensional private aggregation (Asi et al., 2025), where it was shown to have utility advantages over the standard Poisson sampling. Theoretical analyses of this sampling scheme (Feldman & Shenfeld, 2025; Dong et al., 2025) lead to bounds that are close to those of Poisson sampling, yet still have two significant shortcomings. First, in many practical settings, the resulting privacy parameters are not tight due to the approximation steps in the analysis. Second, the computed parameters are either the hockey stick or Rényi divergence, both of which introduce overheads when used in privacy loss accounting. In this work, we demonstrate that the privacy loss distribution (PLD) of random allocation applied to any differentially private algorit...
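PLD accounting works by composing the privacy loss distribution of the base mechanism across the steps in which a user participates. As background (this is standard machinery, not the paper's contribution), the Gaussian mechanism admits a closed-form privacy curve under composition: for n compositions with sensitivity 1 and noise scale σ, δ(ε) = Φ(−ε/s + s/2) − e^ε Φ(−ε/s − s/2) with s = √n/σ (Balle & Wang, 2018). A minimal sketch:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_delta(eps, sigma, steps):
    """Tight delta(eps) for `steps` compositions of the Gaussian
    mechanism with sensitivity 1 and noise scale sigma."""
    s = math.sqrt(steps) / sigma  # effective sensitivity-to-noise ratio
    return norm_cdf(-eps / s + s / 2) - math.exp(eps) * norm_cdf(-eps / s - s / 2)

# Example: 100 steps of DP-SGD-style Gaussian noise with sigma = 4
print(gaussian_delta(1.0, 4.0, 100))
```

A random-allocation accountant in the paper's sense would compose such per-step losses over the k steps a user is actually assigned to, rather than assuming independent Poisson participation.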

