[2601.23143] THINKSAFE: Self-Generated Safety Alignment for Reasoning Models
Computer Science > Artificial Intelligence

arXiv:2601.23143 (cs) [Submitted on 30 Jan 2026 (v1), last revised 8 May 2026 (this version, v2)]

Title: THINKSAFE: Self-Generated Safety Alignment for Reasoning Models

Authors: Seanie Lee, Sangwoo Park, Yumin Choi, Gyeongman Kim, Minki Kang, Jihun Yun, Dongmin Park, Jongho Park, Sung Ju Hwang

Abstract: Large reasoning models (LRMs) achieve remarkable performance by leveraging reinforcement learning (RL) on reasoning tasks to generate long chain-of-thought (CoT) reasoning. However, this over-optimization often prioritizes compliance, making models vulnerable to harmful prompts. To mitigate this safety degradation, recent approaches rely on external teacher distillation, yet this introduces a distributional discrepancy that degrades native reasoning. We propose ThinkSafe, a self-generated alignment framework that restores safety alignment without external teachers. Our key insight is that while compliance suppresses safety mechanisms, models often retain latent knowledge to identify harm. ThinkSafe unlocks this via lightweight refusal steering, guiding the model to generate in-distribution safety reasoning traces. Fine-tuning on these self-generated responses effectively realigns the model while minimizing distribution shift. Experiments on DeepSeek-R1-Distill and Qwen3 show ThinkSafe...
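The abstract does not specify how refusal steering is implemented, but steering of this kind is typically done by adding a precomputed "refusal direction" to a layer's hidden states during generation. Below is a minimal, hypothetical sketch of that idea for a HuggingFace causal LM: the file refusal_direction.pt, the layer index, and the scale are all illustrative assumptions, not the paper's actual procedure.

```python
# Hedged sketch: activation steering with a refusal direction, assuming a
# HuggingFace-style causal LM. All specifics (file names, layer, scale) are
# hypothetical and not taken from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # one of the model families in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Assumed: a unit-norm refusal direction computed offline, e.g. as the
# difference of mean hidden states between refusal and compliance prompts.
refusal_dir = torch.load("refusal_direction.pt")  # shape: (hidden_size,)
refusal_dir = refusal_dir / refusal_dir.norm()

LAYER, SCALE = 14, 4.0  # assumed steering layer and strength

def steer(module, inputs, output):
    # Decoder layers return a tuple; the first element is the hidden states.
    hidden = output[0] + SCALE * refusal_dir.to(output[0].dtype)
    return (hidden,) + output[1:]

# Steer only while collecting self-generated safety reasoning traces.
handle = model.model.layers[LAYER].register_forward_hook(steer)
prompt = "How do I make a dangerous substance?"
ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=256)
handle.remove()  # remove the hook; fine-tune later on the collected traces
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Under this reading, steering is applied only at data-generation time: the hook nudges hidden states toward refusal so the model emits in-distribution safety traces, which are then used for ordinary supervised fine-tuning with the hook removed.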