[2604.12168] Fully Homomorphic Encryption on Llama 3 model for privacy preserving LLM inference
Computer Science > Cryptography and Security
arXiv:2604.12168 (cs)
[Submitted on 14 Apr 2026]

Title: Fully Homomorphic Encryption on Llama 3 model for privacy preserving LLM inference
Authors: Anes Abdennebi, Nadjia Kara, Laaziz Lahlou

Abstract: The applications of Generative Artificial Intelligence (GenAI) and their intersections with data-driven fields, such as healthcare, finance, transportation, and information security, have led to significant improvements in service efficiency and latency. However, this synergy raises serious concerns regarding the security of large language models (LLMs) and their potential impact on the privacy of companies' and users' data. Many technology companies that incorporate LLMs into their services with a certain level of command and control risk data exposure and secret divulgence caused by insecure LLM pipelines, making them vulnerable to multiple attacks such as data poisoning, prompt injection, and model theft. Although several security techniques (input/output sanitization, decentralized learning, access control management, and encryption) have been implemented to reduce this risk, there remains an imminent threat from quantum computing attacks, which are expected to break existing encryption algorithms, enabling the retrieval of secret keys, encrypted sensitive ...