[2602.16977] Fail-Closed Alignment for Large Language Models
Summary
This paper proposes a fail-closed alignment mechanism for large language models (LLMs) to enhance their safety and robustness against prompt-based jailbreaks, addressing a critical weakness in current refusal mechanisms.
Why It Matters
As LLMs become increasingly integrated into various applications, ensuring their safe and reliable operation is paramount. This research introduces a novel approach to model alignment that could significantly reduce risks associated with unsafe outputs, making it relevant for developers and researchers focused on AI safety.
Key Takeaways
- Current LLM refusal mechanisms are vulnerable to prompt-based attacks.
- The proposed fail-closed alignment ensures safety even during partial failures.
- A progressive alignment framework helps models learn multiple independent refusal pathways.
- The method demonstrates strong robustness with minimal computational overhead.
- Empirical analyses support the effectiveness of fail-closed alignment in enhancing LLM safety.
Computer Science > Machine Learning
arXiv:2602.16977 (cs) [Submitted on 19 Feb 2026]
Title: Fail-Closed Alignment for Large Language Models
Authors: Zachary Coalson, Beth Sohler, Aiden Gabriel, Sanghyun Hong
Abstract: We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches encode refusal behaviors across multiple latent features, suppressing a single dominant feature (via prompt-based jailbreaks) can cause alignment to collapse, leading to unsafe generation. Motivated by this, we propose fail-closed alignment as a design principle for robust LLM safety: refusal mechanisms should remain effective even under partial failures via redundant, independent causal pathways. We present a concrete instantiation of this principle: a progressive alignment framework that iteratively identifies and ablates previously learned refusal directions, forcing the model to reconstruct safety along new, independent subspaces. Across four jailbreak attacks, we achieve the strongest overall robustness while mitigating over-refusal and preserving generation quality, with small computational overhead. Our mechanistic analyses confirm that models trained with our method encode multiple, causally independent refusal directions that prompt-based jailbreaks fail to suppress simultaneously.