[2411.13207] LISAA: A Framework for Large Language Model Information Security Awareness Assessment
Computer Science > Cryptography and Security
arXiv:2411.13207 (cs)
[Submitted on 20 Nov 2024 (v1), last revised 19 Mar 2026 (this version, v3)]

Title: LISAA: A Framework for Large Language Model Information Security Awareness Assessment
Authors: Ofir Cohen, Gil Ari Agmon, Asaf Shabtai, Rami Puzis

Abstract: The popularity of large language models (LLMs) continues to grow, and LLM-based assistants have become ubiquitous. Information security awareness (ISA) is an important yet underexplored area of LLM safety. ISA encompasses LLMs' security knowledge, which has been explored in the past, as well as their attitudes and behaviors, which are crucial to an LLM's ability to understand implicit security context and reject unsafe requests that may cause it to unintentionally fail the user. We introduce LISAA, a comprehensive framework for assessing LLM ISA. The framework applies an automated measurement method to a set of 100 realistic scenarios covering all security topics in an ISA taxonomy; these scenarios create tension between implicit security implications and user satisfaction. Applying our LISAA framework to leading LLMs highlights a widespread vulnerability affecting current deployments: many popular models exhibit only medium to low ISA levels, exposing their users to cybersecu...