[2603.28553] Multimodal Analytics of Cybersecurity Crisis Preparation Exercises: What Predicts Success?


arXiv - Machine Learning 3 min read

About this article


Computer Science > Human-Computer Interaction
arXiv:2603.28553 (cs) [Submitted on 30 Mar 2026]

Title: Multimodal Analytics of Cybersecurity Crisis Preparation Exercises: What Predicts Success?
Authors: Conrad Borchers, Valdemar Švábenský, Sandesh K. Kafle, Kevin K. Tang, Jan Vykopal

Abstract: Instructional alignment, the match between intended cognition and enacted activity, is central to effective instruction but hard to operationalize at scale. We examine alignment in cybersecurity simulations using multimodal traces from 23 teams (76 students) across five exercise sessions. Study 1 codes objectives and team emails with Bloom's taxonomy and models the completion of key exercise tasks with generalized linear mixed models. Alignment, defined as the discrepancy between required and enacted Bloom levels, predicts success, whereas the Bloom category alone does not predict success once discrepancy is considered. Study 2 compares predictive feature families using grouped cross-validation and L1-regularized logistic regression. Text embeddings and log features outperform Bloom-only models (AUC ~0.74 and 0.71 vs. 0.55), and their combination performs best (test AUC ~0.80), with Bloom frequencies adding little. Overall, the work offers a measure of alignment for simulations and shows that multimodal...
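The Study 2 evaluation pairs grouped cross-validation with L1-regularized logistic regression so that no team's data appears in both training and test folds. A minimal sketch of that setup, using scikit-learn and purely synthetic stand-in features (the paper's actual text embeddings, log features, and Bloom frequencies are not reproduced here):

```python
# Hedged sketch of grouped CV + L1 logistic regression scored by AUC.
# Data and feature names are synthetic assumptions, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_teams, obs_per_team = 23, 10                  # 23 teams, as in the paper
n = n_teams * obs_per_team
# Team IDs as groups: all of a team's rows stay together in one fold
groups = np.repeat(np.arange(n_teams), obs_per_team)

X = rng.normal(size=(n, 8))                     # stand-in for embedding/log features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# L1 penalty requires a solver that supports it, e.g. liblinear
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
cv = GroupKFold(n_splits=5)                     # never splits a team across train/test
aucs = cross_val_score(clf, X, y, groups=groups, cv=cv, scoring="roc_auc")
print(f"mean AUC across folds: {aucs.mean():.2f}")
```

Grouping by team is what makes the reported AUCs estimates of generalization to unseen teams rather than to held-out rows from teams already seen in training.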

Originally published on March 31, 2026. Curated by AI News.

