[2505.11963] MARVEL: Multi-Agent RTL Vulnerability Extraction using Large Language Models

arXiv - AI

Summary

The paper presents MARVEL, a multi-agent framework that uses Large Language Models (LLMs) to detect security vulnerabilities in RTL hardware designs, strengthening security verification workflows.

Why It Matters

As hardware security becomes increasingly critical, MARVEL offers a novel approach to vulnerability detection, potentially streamlining the verification process and improving the reliability of system-on-chip designs. This research is relevant for engineers and researchers focused on hardware security and AI applications in design verification.

Key Takeaways

  • MARVEL employs a multi-agent system to enhance RTL vulnerability extraction.
  • The framework integrates various tools for comprehensive security analysis.
  • Evaluation on SoCs with known bugs demonstrated MARVEL's effectiveness at identifying real vulnerabilities.

Computer Science > Cryptography and Security
arXiv:2505.11963 (cs) [Submitted on 17 May 2025 (v1), last revised 23 Feb 2026 (this version, v3)]

Title: MARVEL: Multi-Agent RTL Vulnerability Extraction using Large Language Models
Authors: Luca Collini, Baleegh Ahmad, Joey Ah-kiow, Ramesh Karri

Abstract: Hardware security verification is a challenging and time-consuming task. Design engineers may use formal verification, linting, and functional simulation tests, coupled with analysis and a deep understanding of the hardware design being inspected. Large Language Models (LLMs) have been used to assist during this task, either directly or in conjunction with existing tools. We improve the state of the art by proposing MARVEL, a multi-agent LLM framework for a unified approach to decision-making, tool use, and reasoning. MARVEL mimics the cognitive process of a designer looking for security vulnerabilities in RTL code. It consists of a supervisor agent that devises the security policy of the system-on-chip (SoC) using its security documentation. It delegates tasks to validate the security policy to individual executor agents. Each executor agent carries out its assigned task using a particular strategy. Each executor agent may use one or more tools to identify potential security bugs in the design and send th...
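The supervisor/executor pattern the abstract describes, where a supervisor derives verification tasks from a security policy and delegates each to executor agents equipped with tools, can be illustrated with a minimal sketch. This is a hypothetical toy, not MARVEL's implementation: the class names, the regex-based "linter" tool, and the RTL snippet are all invented for illustration, and a real system would route tasks to LLM-backed agents rather than a fixed rule.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    task: str      # which policy task flagged this
    line_no: int   # line in the RTL source
    detail: str    # offending source line

# Toy "tool": flag debug/override signals that could bypass access control.
# A real executor agent might invoke a linter, formal tool, or an LLM here.
def lint_for_debug_overrides(rtl: str, task: str) -> list[Finding]:
    findings = []
    for i, line in enumerate(rtl.splitlines(), start=1):
        if re.search(r"\b(debug_en|jtag_unlock|force_grant)\b", line):
            findings.append(Finding(task, i, line.strip()))
    return findings

class ExecutorAgent:
    """Carries out one assigned task using its tool (strategy)."""
    def __init__(self, name, tool):
        self.name, self.tool = name, tool

    def run(self, rtl: str, task: str) -> list[Finding]:
        return self.tool(rtl, task)

class SupervisorAgent:
    """Delegates policy-validation tasks to executors and collects reports."""
    def __init__(self, executors):
        self.executors = executors

    def verify(self, policy_tasks, rtl):
        report = []
        for task in policy_tasks:          # one task per policy item
            for ex in self.executors:      # fan out to each executor
                report.extend(ex.run(rtl, task))
        return report

# Deliberately buggy RTL: a debug signal bypasses the lock's authentication.
rtl_snippet = """\
module lock(input clk, input debug_en, output reg unlocked);
  always @(posedge clk)
    if (debug_en) unlocked <= 1'b1;  // bypasses authentication
endmodule
"""

supervisor = SupervisorAgent([ExecutorAgent("linter", lint_for_debug_overrides)])
report = supervisor.verify(["no debug bypass of the lock FSM"], rtl_snippet)
for f in report:
    print(f"[{f.task}] line {f.line_no}: {f.detail}")
```

The sketch captures only the delegation structure: the supervisor owns the policy-to-task mapping, while each executor is interchangeable behind a common `run` interface, which is what lets a framework mix linting, simulation, and LLM-reasoning strategies.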
