[2604.06831] Towards Privacy-Preserving Large Language Model: Text-free Inference Through Alignment and Adaptation
Computer Science > Cryptography and Security
arXiv:2604.06831 (cs)
[Submitted on 8 Apr 2026]

Title: Towards Privacy-Preserving Large Language Model: Text-free Inference Through Alignment and Adaptation
Authors: Jeongho Yoon, Chanhee Park, Yongchan Chun, Hyeonseok Moon, Heuiseok Lim

Abstract: Current LLM-based services typically require users to submit raw text regardless of its sensitivity. While intuitive, this practice introduces substantial privacy risks, as unauthorized access may expose personal, medical, or legal information. Although prior defenses have sought to mitigate these risks, they often incur substantial computational overhead and degrade model performance. To overcome this privacy-efficiency trade-off, we introduce Privacy-Preserving Fine-Tuning (PPFT), a novel training pipeline that eliminates the need to transmit raw prompt text while maintaining a favorable balance between privacy preservation and model utility for both clients and service providers. Our approach operates in two stages: first, we train a client-side encoder together with a server-side projection module and LLM, enabling the server to condition on k-pooled prompt embeddings instead of raw text; second, we fine-tune the projection module and LLM on private, domain-specific data using noise-injec...
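The abstract's core idea, conditioning the server-side LLM on k-pooled, noise-perturbed prompt embeddings rather than raw text, can be illustrated with a minimal sketch. This is not the paper's implementation: the chunked mean-pooling, the Gaussian noise scale, and the helper names (`k_pool`, `add_noise`) are all illustrative assumptions standing in for the client-side encoder and noise-injection scheme described above.

```python
import numpy as np

def k_pool(token_embeddings: np.ndarray, k: int) -> np.ndarray:
    """Mean-pool a (seq_len, dim) token-embedding matrix into k vectors
    by averaging over k roughly equal contiguous chunks.
    (Illustrative pooling; the paper's exact scheme may differ.)"""
    chunks = np.array_split(token_embeddings, k, axis=0)
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

def add_noise(pooled: np.ndarray, scale: float,
              rng: np.random.Generator) -> np.ndarray:
    """Inject Gaussian noise so the server never receives the exact
    pooled embeddings (hypothetical noise-injection step)."""
    return pooled + rng.normal(0.0, scale, size=pooled.shape)

rng = np.random.default_rng(0)
emb = rng.normal(size=(17, 64))    # stand-in for client-encoder output
pooled = k_pool(emb, k=4)          # shape (4, 64): only this leaves the client
noisy = add_noise(pooled, scale=0.1, rng=rng)
print(pooled.shape, noisy.shape)   # (4, 64) (4, 64)
```

The client would transmit only `noisy` to the server, whose projection module maps these k vectors into the LLM's input space, so the raw prompt text never leaves the client.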