[2506.15307] SecP-Tuning: Efficient Privacy-Preserving Prompt Tuning for Large Language Models via MPC
Computer Science > Machine Learning
arXiv:2506.15307 (cs)
[Submitted on 18 Jun 2025 (v1), last revised 28 Feb 2026 (this version, v3)]

Title: SecP-Tuning: Efficient Privacy-Preserving Prompt Tuning for Large Language Models via MPC
Authors: Jinglong Luo, Zhuo Zhang, Yehong Zhang, Shiyu Liu, Ye Dong, Hui Wang, Yue Yu, Xun Zhou, Zenglin Xu

Abstract: Large Language Models (LLMs) have revolutionized numerous fields, yet their adaptation to specialized tasks in privacy-sensitive domains such as healthcare and finance remains constrained by the scarcity of accessible training data caused by stringent privacy requirements. Secure Multi-party Computation (MPC)-based privacy-preserving machine learning provides theoretical guarantees for the privacy of model parameters and data. However, its application to LLMs has been predominantly limited to inference, as fine-tuning introduces significant efficiency challenges, particularly in backward propagation, optimizer, and self-attention operations. To address these challenges, we propose SecP-Tuning, the first MPC-based framework designed for efficient, privacy-preserving prompt tuning of LLMs. SecP-Tuning innovatively integrates Forward-only Tuning (FoT) through the "data owner-server interaction" paradigm, effectively removing the need for privacy-p...
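The abstract's reliance on MPC rests on secret sharing: each party holds a random-looking share of a value, and linear operations can be computed on shares locally without revealing the underlying data. The sketch below illustrates this core primitive with two-party additive secret sharing over a ring; it is a minimal illustration of the general MPC idea, not the specific protocol used by SecP-Tuning.

```python
import secrets

RING = 2**32  # arithmetic over a finite ring, as is standard in MPC protocols


def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares; each share alone reveals nothing about x."""
    r = secrets.randbelow(RING)
    return r, (x - r) % RING


def reconstruct(s0: int, s1: int) -> int:
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % RING


# Secret-share two private values (e.g., a model parameter and a data value).
a0, a1 = share(7)
b0, b1 = share(35)

# Each party adds its local shares; no communication or plaintext exposure needed.
c0 = (a0 + b0) % RING
c1 = (a1 + b1) % RING

# Reconstruction yields the true sum of the secrets.
print(reconstruct(c0, c1))  # prints 42
```

Multiplications and non-linear operations (such as those in self-attention) require interactive sub-protocols, which is precisely the efficiency bottleneck the abstract describes for fine-tuning under MPC.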