[2604.02574] Understanding the Effects of Safety Unalignment on Large Language Models
Computer Science > Cryptography and Security

arXiv:2604.02574 (cs)

[Submitted on 2 Apr 2026]

Title: Understanding the Effects of Safety Unalignment on Large Language Models

Authors: John T. Halloran

Abstract: Safety alignment has become a critical step in ensuring that LLMs refuse harmful requests while providing helpful and harmless responses. However, despite the ubiquity of safety alignment in deployed frontier models, two separate lines of recent work, jailbreak-tuning (JT) and weight orthogonalization (WO), have shown that safety guardrails can be largely disabled, producing LLMs that comply with harmful requests they would normally refuse. Despite the far-reaching safety implications, analysis has largely been limited to the refusal rates of each unalignment method in isolation, leaving their relative effects on adversarial LLM capabilities unknown. To fill this gap, we study the impact of unaligning six popular LLMs of various sizes across a large number of malicious and benign tasks, using both JT and WO. Across the evaluated models, we show that while refusal degradation is split between the two methods, WO produces LLMs far more capable of aiding in malicious activity; in contrast to JT, the majority of WO-unaligned models are far less prone to hallucinations, better retain their original natural-language performance, an...
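For readers unfamiliar with the second method, below is a minimal sketch of weight orthogonalization under the widely used refusal-direction-ablation formulation, in which a single direction in the residual stream is taken to mediate refusal and is projected out of every weight matrix that writes to that stream. The module-name filter (Llama-style HuggingFace naming) and the overall recipe are assumptions for illustration, not this paper's specification.

```python
# Sketch of weight orthogonalization (refusal-direction ablation), assuming:
#  - a precomputed refusal direction r of shape (d_model,), and
#  - Llama-style module names for residual-stream-writing matrices
#    (hypothetical choices; the paper's exact procedure may differ).
import torch

@torch.no_grad()
def orthogonalize_weights(model: torch.nn.Module, refusal_dir: torch.Tensor) -> None:
    r = refusal_dir / refusal_dir.norm()  # unit refusal direction, shape (d_model,)
    proj = torch.outer(r, r)              # rank-1 projector r r^T, shape (d_model, d_model)
    for name, param in model.named_parameters():
        # Matrices whose outputs land in the residual stream:
        # attention output and MLP down-projections in HF Llama naming.
        if name.endswith(("o_proj.weight", "down_proj.weight")):
            # Remove the r-component from every output column: W <- (I - r r^T) W,
            # so no layer can write along the refusal direction.
            param.sub_(proj @ param)
```

In that formulation, the refusal direction is typically estimated as the difference in mean residual-stream activations between harmful and harmless prompts. Jailbreak-tuning, by contrast, works by fine-tuning the model on data that elicits compliance with harmful requests, which is why the two methods can differ in their side effects on hallucination and natural-language performance.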