[2605.01913] RefusalGuard: Geometry-Preserving Fine-Tuning for Safety in LLMs


Computer Science > Machine Learning
arXiv:2605.01913 (cs) [Submitted on 3 May 2026]

Title: RefusalGuard: Geometry-Preserving Fine-Tuning for Safety in LLMs
Authors: Sadia Asif, Mohammad Mohammadi Amiri

Abstract: Fine-tuning safety-aligned language models for downstream tasks often leads to substantial degradation of refusal behavior, making models vulnerable to adversarial misuse. While prior work has shown that safety-relevant features are encoded in structured representations within the model's activation space, how these representations change during fine-tuning and why alignment degrades remain poorly understood. In this work, we investigate the representation-level mechanisms underlying alignment degradation. Our analysis shows that standard fine-tuning induces systematic drift in safety-relevant representations, distorts their geometric structure, and introduces interference between task optimization and safety features. These effects collectively lead to increased harmful compliance. Motivated by these findings, we introduce REFUSALGUARD, a representation-level fine-tuning framework that preserves safety-relevant structure during model adaptation. Our approach constrains updates in hidden representation space, ensuring that safety-mediating components remain stable while allowing task-specific learning in compl...
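The abstract describes constraining fine-tuning updates in hidden representation space so that representations stay close to those of the pre-fine-tuning model. The paper's actual constraint is not given here, so the toy below is only a minimal numpy sketch of the general idea: fine-tune a linear "layer" on a new task while adding a hypothetical drift penalty `lam * ||W x - W_ref x||^2` that anchors hidden representations to the frozen reference weights `W_ref`. All names and the penalty form are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy illustration (NOT the paper's method): fine-tune weights W on a new
# task while penalizing drift of hidden representations H = X W^T away from
# the representations produced by the frozen pre-fine-tuning weights W_ref.
rng = np.random.default_rng(0)
d, n = 8, 64
W_ref = rng.normal(size=(d, d))          # frozen reference weights
X = rng.normal(size=(n, d))              # task inputs
Y = X @ rng.normal(size=(d, d)).T        # targets for the new downstream task

def finetune(lam, steps=200, lr=0.01):
    """Gradient descent on task MSE + lam * representation-drift penalty."""
    W = W_ref.copy()
    for _ in range(steps):
        H = X @ W.T                              # current hidden representations
        task_grad = 2 * (H - Y).T @ X / n        # grad of task loss
        drift_grad = 2 * (H - X @ W_ref.T).T @ X / n  # grad of drift penalty
        W -= lr * (task_grad + lam * drift_grad)
    return W

def drift(W):
    # mean squared displacement of representations from the reference model
    return float(np.mean((X @ W.T - X @ W_ref.T) ** 2))

W_plain = finetune(lam=0.0)     # standard fine-tuning
W_guarded = finetune(lam=5.0)   # drift-penalized fine-tuning
print(drift(W_guarded) < drift(W_plain))  # penalized run stays closer to W_ref
```

The trade-off the penalty weight `lam` controls mirrors the tension the abstract identifies: larger `lam` keeps representations (and, by hypothesis, refusal behavior) closer to the aligned reference at the cost of slower task adaptation.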

Originally published on May 05, 2026. Curated by AI News.

