[2603.05035] Good-Enough LLM Obfuscation (GELO)
Computer Science > Cryptography and Security
arXiv:2603.05035 (cs)
[Submitted on 5 Mar 2026]

Title: Good-Enough LLM Obfuscation (GELO)
Authors: Anatoly Belikov, Ilya Fedotov

Abstract: Large Language Models (LLMs) are increasingly served on shared accelerators where an adversary with read access to device memory can observe KV caches and hidden states, threatening prompt privacy for open-source models. Cryptographic protections such as MPC and FHE offer strong guarantees but remain one to two orders of magnitude too slow for interactive inference, while static obfuscation schemes break under multi-run statistical attacks once the model is known. We present GELO (Good-Enough LLM Obfuscation), a lightweight protocol for privacy-preserving inference that limits information leakage from untrusted accelerator observations by hiding hidden states with fresh, per-batch invertible mixing. For each offloaded projection, the TEE samples a random matrix $A$, forms $U = AH$, offloads $U$ and weights $W$ to the accelerator, and then applies $A^{-1}$ on return, so that $A^{-1}((AH)W) = HW$ and outputs are unchanged. Because mixing is never reused across batches, the attacker faces only a single-batch blind source separation problem. We analyze information leakage and introduce two practical defenses: (i) non-orthogonal mixing to mask Gram matrices, and (ii) orthogon...
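The per-batch mixing step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the identity $A^{-1}((AH)W) = HW$, not the paper's implementation: all matrix sizes are assumed for the example, and the TEE/accelerator split is indicated only by comments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 8, 16, 4              # illustrative dimensions (assumed, not from the paper)

H = rng.normal(size=(d, m))     # hidden states, held privately inside the TEE
W = rng.normal(size=(m, k))     # projection weights, visible to the accelerator

# TEE side: sample a fresh invertible (here non-orthogonal) mixing matrix A
# for this batch; a Gaussian matrix is invertible with probability 1.
A = rng.normal(size=(d, d))
U = A @ H                       # mixed states U = AH, offloaded to the accelerator

# Accelerator side: performs the projection on mixed inputs only.
Y_mixed = U @ W                 # (AH)W

# TEE side: unmix the returned result; solve(A, .) applies A^{-1} without
# explicitly forming the inverse.
Y = np.linalg.solve(A, Y_mixed)

# Outputs match the unprotected computation exactly.
assert np.allclose(Y, H @ W)
```

Because $A$ is resampled for every batch, the accelerator only ever sees $U$ and $Y_{\text{mixed}}$ under a one-time mixing, which is the source of the single-batch blind source separation framing in the abstract.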