[2604.04461] DP-OPD: Differentially Private On-Policy Distillation for Language Models


arXiv - AI


Computer Science > Machine Learning
arXiv:2604.04461 (cs)
[Submitted on 6 Apr 2026]

Title: DP-OPD: Differentially Private On-Policy Distillation for Language Models
Authors: Fatemeh Khadem, Sajad Mousavi, Yi Fang, Yuhong Liu

Abstract: Large language models (LLMs) are increasingly adapted to proprietary and domain-specific corpora that contain sensitive information, creating a tension between formal privacy guarantees and efficient deployment through model compression. Differential privacy (DP), typically enforced via DP-SGD, provides record-level protection but often incurs substantial utility loss in autoregressive generation, where optimization noise can amplify exposure bias and compounding errors along long rollouts. Existing approaches to private distillation either apply DP-SGD to both teacher and student, worsening computation and the privacy–utility tradeoff, or rely on DP synthetic text generation from a DP-trained teacher, avoiding DP on the student at the cost of DP-optimizing a large teacher and introducing an offline generation pipeline. We propose Differentially Private On-Policy Distillation (DP-OPD), a synthesis-free framework that enforces privacy solely through DP-SGD on the student while leveraging a frozen teacher to provide dense token-level targets on student-generated...
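The privacy mechanism the abstract names is standard DP-SGD applied only to the student: each example's gradient (here, from a token-level distillation loss on a student rollout) is clipped to a fixed L2 norm, summed, and perturbed with Gaussian noise before the update. The sketch below illustrates just that privatization step under stated assumptions; it is not the authors' implementation, and `privatize_gradients` and the toy gradient vectors are hypothetical:

```python
import math
import random

def privatize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD gradient privatization (Abadi et al.-style):
    clip each per-example gradient to L2 norm `clip_norm`, sum,
    add Gaussian noise with std `noise_multiplier * clip_norm`,
    then average over the batch."""
    batch = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to norm C
        for i, x in enumerate(g):
            summed[i] += x * scale
    sigma = noise_multiplier * clip_norm  # noise calibrated to sensitivity C
    return [(s + rng.gauss(0.0, sigma)) / batch for s in summed]

# Toy per-example gradients (hypothetical): norms 5.0 and 0.5.
rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, 0.4]]
priv = privatize_gradients(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
# With zero noise: first gradient is clipped to [0.6, 0.8], the second
# is left intact, so the average is approximately [0.45, 0.6].
```

In DP-OPD's setting, only these student-side gradients touch the privacy budget; the frozen teacher scores the student's own rollouts to supply the dense token-level targets, so no separate DP training of the teacher or offline synthetic-text pipeline is needed (per the abstract).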

Originally published on April 07, 2026. Curated by AI News.


