[2401.00870] ConfusionPrompt: Practical Private Inference for Online Large Language Models

arXiv - AI 4 min read

Computer Science > Cryptography and Security
arXiv:2401.00870 (cs)
[Submitted on 30 Dec 2023 (v1), last revised 8 Apr 2026 (this version, v5)]

Title: ConfusionPrompt: Practical Private Inference for Online Large Language Models
Authors: Peihua Mai, Youjia Yang, Ran Yan, Rui Ye, Yan Pang

Abstract: State-of-the-art large language models (LLMs) are typically deployed as online services, requiring users to transmit detailed prompts to cloud servers. This raises significant privacy concerns. In response, we introduce ConfusionPrompt, a novel framework for private LLM inference that protects user privacy by: (i) decomposing the original prompt into smaller sub-prompts, and (ii) generating pseudo-prompts alongside the genuine sub-prompts, which are then sent to the LLM. The server responses are later recomposed by the user to reconstruct the final output. This approach offers key advantages over previous LLM privacy-protection methods: (i) it integrates seamlessly with existing black-box LLMs, and (ii) it delivers a significantly improved privacy-utility trade-off compared to existing text-perturbation methods. We also develop a $(\lambda, \mu, \rho)$-privacy model to formulate the requirements for a privacy-preserving group of prompts and provide a complexity analysis to justify the role of prompt decomposition. ...
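The abstract describes a three-step client-side flow: decompose the prompt, mix in pseudo-prompts, and recompose only the genuine responses. The sketch below illustrates that flow in Python under stated assumptions; the helper names (`make_pseudo`, `query_llm`, `recompose`) and the one-pseudo-prompt-per-sub-prompt ratio are hypothetical stand-ins, not the paper's actual algorithm.

```python
import random

def confusion_prompt_query(sub_prompts, make_pseudo, query_llm, recompose):
    """Sketch of the flow described in the abstract: genuine sub-prompts
    are mixed with pseudo-prompts, the whole batch is sent to the LLM,
    and only the responses to genuine sub-prompts are recomposed locally."""
    # Generate one pseudo-prompt per genuine sub-prompt (hypothetical ratio).
    pseudo_prompts = [make_pseudo(p) for p in sub_prompts]

    # Shuffle genuine and pseudo prompts so the server cannot tell them apart.
    tagged = [(p, True) for p in sub_prompts] + [(p, False) for p in pseudo_prompts]
    random.shuffle(tagged)

    # The server answers every prompt; it only ever sees the mixed batch.
    responses = [(query_llm(p), genuine) for p, genuine in tagged]

    # Locally discard pseudo-prompt responses and recompose the final output.
    genuine_responses = [r for r, genuine in responses if genuine]
    return recompose(genuine_responses)
```

With stub functions in place of a real LLM call, `confusion_prompt_query(["a", "b"], lambda p: p + "_fake", lambda p: p.upper(), lambda rs: " ".join(sorted(rs)))` returns the recomposition of only the genuine responses, while the "server" saw four prompts.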

Originally published on April 09, 2026. Curated by AI News.

Related Articles

Llms

Diffusion for generating/editing ASTs? [D]

I’m not a machine learning expert or anything, but I do enjoy learning about how it all works. I’ve noticed that one of the main limitati...

Reddit - Machine Learning · 1 min ·
Llms

ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns | The Verge

OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and s...

The Verge - AI · 4 min ·
Llms

AI is helpful but still not “there” yet

what I mean is that every time I use Claude, or Grok or any of the AI platforms and tools, I realize how far this technology is from repl...

Reddit - Artificial Intelligence · 1 min ·
Llms

ChatGPT Has 'Goblin' Mania in the US. In China It Will 'Catch You Steadily' | WIRED

OpenAI's chatbot has some weird linguistic tics in Chinese that are driving users crazy.

Wired - AI · 8 min ·
