[2603.22869] Chain-of-Authorization: Internalizing Authorization into Large Language Models via Reasoning Trajectories
Computer Science > Artificial Intelligence
arXiv:2603.22869 (cs)
[Submitted on 24 Mar 2026]

Title: Chain-of-Authorization: Internalizing Authorization into Large Language Models via Reasoning Trajectories
Authors: Yang Li, Yule Liu, Xinlei He, Youjian Zhao, Qi Li, Ke Xu

Abstract: Large Language Models (LLMs) have become core cognitive components in modern artificial intelligence (AI) systems, combining internal knowledge with external context to perform complex tasks. However, LLMs typically treat all accessible data indiscriminately, lacking inherent awareness of knowledge ownership and access boundaries. This deficiency heightens the risk of sensitive data leakage and adversarial manipulation, potentially enabling unauthorized system access and severe security incidents. Existing protection strategies rely on rigid, uniform defenses that preclude dynamic authorization: structural isolation methods face scalability bottlenecks, while prompt-guidance methods struggle with fine-grained permission distinctions. Here, we propose the Chain-of-Authorization (CoA) framework, a secure training and reasoning paradigm that internalizes authorization logic into LLMs' core capabilities. Unlike passive external defenses, CoA restructures the model's information flow: it embeds permission context at ...
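To make the abstract's central idea concrete, the following Python sketch illustrates one way permission context could be embedded into a prompt and checked step by step in a target reasoning trajectory. This is a minimal, hypothetical illustration of the general concept, not the paper's actual data format or training pipeline: the `clearance` field, the integer permission levels, and the trajectory template are all assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class Document:
    """A retrieved document tagged with a required permission level.

    The `clearance` field is a hypothetical stand-in for whatever
    ownership/access metadata a real deployment would attach.
    """
    doc_id: str
    clearance: int  # minimum user permission level required to read it
    text: str

def build_coa_style_example(user_level: int, question: str,
                            docs: list[Document]) -> dict:
    """Construct a supervised example whose target reasoning trajectory
    performs an explicit per-document authorization check before answering.

    This mirrors the abstract's idea of internalizing authorization into
    the model's reasoning, rather than filtering externally.
    """
    # Embed the permission context directly in the prompt, so the model
    # is trained to condition its reasoning on access boundaries.
    context_lines = [f"[user permission level: {user_level}]"]
    for d in docs:
        context_lines.append(
            f"[doc {d.doc_id} | requires level {d.clearance}] {d.text}")
    prompt = "\n".join(context_lines) + f"\nQuestion: {question}"

    # Target trajectory: one authorization-check step per document,
    # ending with an instruction to use only authorized sources.
    steps, allowed = [], []
    for d in docs:
        if user_level >= d.clearance:
            steps.append(f"Doc {d.doc_id} requires level {d.clearance} "
                         f"<= {user_level}: authorized, may cite.")
            allowed.append(d)
        else:
            steps.append(f"Doc {d.doc_id} requires level {d.clearance} "
                         f"> {user_level}: denied, must not reveal.")
    basis = ", ".join(d.doc_id for d in allowed) or "no authorized sources"
    trajectory = "\n".join(steps) + f"\nAnswer using only: {basis}."
    return {"prompt": prompt, "target": trajectory}

if __name__ == "__main__":
    docs = [
        Document("salary-db", clearance=3, text="Engineering salaries for 2026 ..."),
        Document("handbook", clearance=1, text="Employees accrue 20 vacation days."),
    ]
    ex = build_coa_style_example(
        user_level=1, question="How many vacation days do I get?", docs=docs)
    print(ex["prompt"], "---", ex["target"], sep="\n")
```

Run on the sample above, the target trajectory authorizes the handbook but marks the salary database as denied, so the answer is grounded only in the authorized source. Training on trajectories of this shape is one plausible reading of how authorization checks could become part of the model's own reasoning rather than an external guardrail.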