[2603.20206] Enhancing Safety of Large Language Models via Embedding Space Separation
Computer Science > Computation and Language

arXiv:2603.20206 (cs) [Submitted on 1 Mar 2026]

Title: Enhancing Safety of Large Language Models via Embedding Space Separation
Authors: Xu Zhao, Xiting Wang, Weiran Shen

Abstract: Large language models (LLMs) have achieved impressive capabilities, yet ensuring their safety against harmful prompts remains a critical challenge. Recent work has revealed that the latent representations (embeddings) of harmful and safe queries in LLMs typically exhibit linear separability, a property that has been exploited to construct attacks by perturbing the embeddings of harmful queries towards the safe subspace. Motivated by this observation, we propose a representation-level fine-tuning approach, named Embedding Space Separation (ES2), which improves LLM safety by explicitly enlarging the distance between harmful and safe representations in the embedding space. To prevent degradation of the model's general capabilities, we introduce a Kullback-Leibler (KL) divergence regularization term into the loss function, which constrains the logits of the fine-tuned model to align with those of the original base model on harmless inputs. We evaluate our method on several open-source LLMs using standard safety benchmarks. Extensive experimental results demonstrate that our approach substantially impr...
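The abstract describes a two-term training objective: a separation term that enlarges the distance between harmful and safe representations, and a KL-divergence term that keeps the fine-tuned model's logits on harmless inputs close to the frozen base model's. The paper does not give the exact formulation here, so the following is a minimal NumPy sketch under assumptions: the separation term is taken as a hinge on the distance between the two class centroids (the `margin` parameter and the function name `es2_loss` are illustrative, not from the paper).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def es2_loss(harmful_emb, safe_emb, ft_logits, base_logits,
             margin=10.0, kl_weight=1.0):
    """Illustrative ES2-style objective (not the authors' implementation).

    harmful_emb, safe_emb : (batch, dim) last-layer query embeddings
    ft_logits, base_logits: (batch, vocab) logits on harmless inputs,
                            from the fine-tuned and frozen base model
    """
    # Separation term: penalize the two class centroids for being
    # closer than `margin`; zero once they are sufficiently far apart.
    dist = np.linalg.norm(harmful_emb.mean(axis=0) - safe_emb.mean(axis=0))
    sep_loss = max(0.0, margin - dist)

    # KL regularizer: KL(base || fine-tuned) over the vocabulary,
    # averaged over the batch, to preserve general capabilities.
    p = softmax(base_logits)   # reference distribution (frozen base model)
    q = softmax(ft_logits)     # fine-tuned model
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1).mean()

    return sep_loss + kl_weight * kl
```

As a sanity check, when the fine-tuned logits match the base logits exactly, the KL term vanishes and the loss reduces to the hinge on the centroid distance; once the harmful and safe centroids are farther apart than the margin, the loss is approximately zero.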