[2510.01569] InvThink: Premortem Reasoning for Safer Language Models
Computer Science > Artificial Intelligence
arXiv:2510.01569 (cs)
[Submitted on 2 Oct 2025 (v1), last revised 8 May 2026 (this version, v3)]

Title: InvThink: Premortem Reasoning for Safer Language Models
Authors: Yubin Kim, Taehan Kim, Eugene Park, Chunjong Park, Cynthia Breazeal, Daniel McDuff, Hae Won Park

Abstract: We present InvThink, a training and prompting framework that requires the model to enumerate, analyze, and constrain potential failures before generating its final response. Unlike existing safety alignment methods that optimize only for safe final responses, InvThink structures generation into three steps: (1) enumerate potential harms, (2) analyze their consequences, and (3) generate the response under explicit mitigation constraints. We report three findings: (i) InvThink achieves higher safety scores at larger model sizes than existing safety prompting and alignment baselines. (ii) InvThink mitigates the safety tax: models trained with InvThink preserve their reasoning capability on standard benchmarks. (iii) Beyond general safety tasks, InvThink also reduces harmful behavior in professional ethics domains (medicine, finance, law) and in agentic misalignment scenarios, achieving up to a 32% reduction in harmfulness over zero-shot baselines and 16% over SafetyPrompt. We extend InvThink with supervised fine-tuning...
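The three-step generation structure described in the abstract can be sketched as a prompt scaffold. This is a minimal illustration, assuming a simple template wrapper; the template wording and the function name `build_invthink_prompt` are hypothetical and not taken from the paper:

```python
# Sketch of InvThink-style premortem prompting.
# The template text below is an assumption for illustration; the paper's
# exact prompts and training setup are not reproduced here.

INVTHINK_TEMPLATE = """Answer the user's request below. Before your final
answer, reason in three explicit steps:
1. ENUMERATE: list the potential harms this request could lead to.
2. ANALYZE: for each harm, describe its consequences and severity.
3. RESPOND: give your final answer under explicit constraints that
   mitigate every harm identified above.

User request:
{request}
"""

def build_invthink_prompt(request: str) -> str:
    """Wrap a user request in the three-step premortem scaffold."""
    return INVTHINK_TEMPLATE.format(request=request)

if __name__ == "__main__":
    print(build_invthink_prompt("How should I store user passwords?"))
```

The scaffold makes the enumerate/analyze/respond steps explicit in the output, which is the property the abstract contrasts with methods that optimize only the final response.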