[2602.16977] Fail-Closed Alignment for Large Language Models

arXiv - Machine Learning 3 min read Article

Summary

This paper proposes a fail-closed alignment mechanism for large language models (LLMs) to enhance their safety and robustness against prompt-based jailbreaks, addressing a critical weakness in current refusal mechanisms.

Why It Matters

As LLMs become increasingly integrated into various applications, ensuring their safe and reliable operation is paramount. This research introduces a novel approach to model alignment that could significantly reduce risks associated with unsafe outputs, making it relevant for developers and researchers focused on AI safety.

Key Takeaways

  • Current LLM refusal mechanisms are vulnerable to prompt-based attacks.
  • The proposed fail-closed alignment ensures safety even during partial failures.
  • A progressive alignment framework helps models learn multiple independent refusal pathways.
  • The method demonstrates strong robustness with minimal computational overhead.
  • Empirical analyses support the effectiveness of fail-closed alignment in enhancing LLM safety.

Computer Science > Machine Learning
arXiv:2602.16977 (cs) [Submitted on 19 Feb 2026]
Title: Fail-Closed Alignment for Large Language Models
Authors: Zachary Coalson, Beth Sohler, Aiden Gabriel, Sanghyun Hong

Abstract: We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches encode refusal behaviors across multiple latent features, suppressing a single dominant feature (via prompt-based jailbreaks) can cause alignment to collapse, leading to unsafe generation. Motivated by this, we propose fail-closed alignment as a design principle for robust LLM safety: refusal mechanisms should remain effective even under partial failures via redundant, independent causal pathways. We present a concrete instantiation of this principle: a progressive alignment framework that iteratively identifies and ablates previously learned refusal directions, forcing the model to reconstruct safety along new, independent subspaces. Across four jailbreak attacks, we achieve the strongest overall robustness while mitigating over-refusal and preserving generation quality, with small computational overhead. Our mechanistic analyses confirm that models trained with our method encode multiple, causally independent refusal directions that prompt-based jailbreaks...

Related Articles

LLMs

[R] Reference model free behavioral discovery of AudiBench model organisms via Probe-Mediated Adaptive Auditing

Anthropic's AuditBench - 56 Llama 3.3 70B models with planted hidden behaviors - their best agent detects the behaviors 10-13% of the tim...

Reddit - Machine Learning · 1 min ·
LLMs

[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.

The problem If you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an after...

Reddit - Machine Learning · 1 min ·
LLMs

I have been coding for 11 years and I caught myself completely unable to debug a problem without AI assistance last month. That scared me more than anything I have seen in this industry.

I want to be honest about something that happened to me because I think it is more common than people admit. Last month I hit a bug in a ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

OpenClaw security checklist: practical safeguards for AI agents

Here is one of the better-quality guides on ensuring safety when deploying OpenClaw: https://chatgptguide.ai/openclaw-security-checkl...

Reddit - Artificial Intelligence · 1 min ·