[2507.23465] Role-Aware Language Models for Secure and Contextualized Access Control in Organizations
Summary
This article explores the development of role-aware language models designed to enhance access control in organizational settings, focusing on user-specific privileges and security measures.
Why It Matters
As large language models (LLMs) are increasingly integrated into enterprise environments, ensuring that these models respect user roles and access rights is crucial for maintaining security and preventing misuse. This research addresses a significant gap in existing safety measures by proposing methods to tailor model responses based on organizational roles.
Key Takeaways
- Role-aware language models can enhance security in organizations by tailoring responses based on user roles.
- The study evaluates three modeling strategies: a BERT-based classifier, an LLM-based classifier, and role-conditioned generation.
- Two datasets were created to assess model performance in realistic enterprise scenarios.
- The research highlights the importance of addressing prompt injection and role mismatch vulnerabilities.
- Findings suggest that fine-tuning LLMs for role-specific access can improve both security and contextual relevance.
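The takeaways above can be illustrated with a minimal sketch of role-conditioned generation: the user's organizational role is prepended to the instruction so a fine-tuned model (or a role classifier gating it) can condition its answer on the caller's privileges. The role names, permission table, and prompt template below are illustrative assumptions, not the paper's actual format.

```python
# Hypothetical role/permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "hr_manager": {"salary_data", "performance_reviews"},
    "engineer": {"source_code", "deployment_configs"},
    "intern": set(),  # no privileged resources
}

def build_role_conditioned_prompt(role: str, instruction: str) -> str:
    """Prepend an explicit role tag so the model can condition on it."""
    return f"[ROLE: {role}]\n{instruction}"

def is_authorized(role: str, resource: str) -> bool:
    """Toy access check of the kind a role-aware classifier would approximate."""
    return resource in ROLE_PERMISSIONS.get(role, set())

prompt = build_role_conditioned_prompt("engineer", "Show the deployment configs.")
print(prompt.splitlines()[0])                           # [ROLE: engineer]
print(is_authorized("engineer", "deployment_configs"))  # True
print(is_authorized("intern", "salary_data"))           # False
```

In practice, the role tag would be injected server-side rather than by the user, since a user-supplied tag is exactly the kind of prompt-injection and role-mismatch vector the paper flags.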
Computer Science > Computation and Language
arXiv:2507.23465 (cs)
[Submitted on 31 Jul 2025 (v1), last revised 23 Feb 2026 (this version, v3)]
Title: Role-Aware Language Models for Secure and Contextualized Access Control in Organizations
Authors: Saeed Almheiri, Yerulan Kongrat, Adrian Santosh, Ruslan Tasmukhanov, Josemaria Loza Vera, Muhammad Dehan Al Kautsar, Fajri Koto
Abstract: As large language models (LLMs) are increasingly deployed in enterprise settings, controlling model behavior based on user roles becomes an essential requirement. Existing safety methods typically assume uniform access and focus on preventing harmful or toxic outputs, without addressing role-specific access constraints. In this work, we investigate whether LLMs can be fine-tuned to generate responses that reflect the access privileges associated with different organizational roles. We explore three modeling strategies: a BERT-based classifier, an LLM-based classifier, and role-conditioned generation. To evaluate these approaches, we construct two complementary datasets. The first is adapted from existing instruction-tuning corpora through clustering and role labeling, while the second is synthetically generated to reflect realistic, role-sensitive enterprise scenarios. We assess model performance across varying o...