[2602.10478] GPU-Fuzz: Finding Memory Errors in Deep Learning Frameworks


arXiv - Machine Learning 3 min read Article

Summary

GPU-Fuzz introduces a novel approach to identifying memory errors in deep learning frameworks, demonstrating its effectiveness by uncovering 13 previously unknown bugs in popular libraries like PyTorch and TensorFlow.

Why It Matters

As deep learning frameworks become increasingly integral to AI applications, ensuring their reliability and security is critical. GPU-Fuzz addresses a significant gap in memory error detection, potentially enhancing the stability of AI systems and protecting against security vulnerabilities.

Key Takeaways

  • GPU-Fuzz models operator parameters as formal constraints to find memory errors.
  • The tool effectively identifies error-prone conditions in GPU kernels.
  • Applying GPU-Fuzz to PyTorch, TensorFlow, and PaddlePaddle revealed 13 previously unknown bugs.
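To make the constraint-based idea concrete, here is a minimal, self-contained sketch. GPU-Fuzz reportedly uses a real constraint solver over operator parameters; this brute-force version, the 1-D pooling operator, and the specific boundary condition are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: express an operator's parameter rules as explicit constraints and
# enumerate parameter values that land exactly on a boundary condition,
# where off-by-one indexing bugs in GPU kernels often surface.
from itertools import product

def boundary_cases(max_val=8):
    """Find (length, kernel, stride) triples for a hypothetical 1-D pooling
    operator where the output collapses to a single element."""
    cases = []
    for length, kernel, stride in product(range(1, max_val + 1), repeat=3):
        # Validity constraints the framework is supposed to enforce.
        if kernel > length:
            continue
        # Boundary constraint: output length is exactly 1.
        if (length - kernel) // stride + 1 == 1:
            cases.append((length, kernel, stride))
    return cases

# Each returned triple is a candidate test case to feed to the operator.
for case in boundary_cases(4)[:5]:
    print(case)
```

Each generated triple would then be used to invoke the real framework operator under a memory-error detector; a solver-based approach scales this search to many parameters without exhaustive enumeration.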

Computer Science > Cryptography and Security

arXiv:2602.10478 (cs) [Submitted on 11 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)]

Title: GPU-Fuzz: Finding Memory Errors in Deep Learning Frameworks

Authors: Zihao Li, Hongyi Lu, Yanan Guo, Zhenkai Zhang, Shuai Wang, Fengwei Zhang

Abstract: GPU memory errors are a critical threat to deep learning (DL) frameworks, leading to crashes or even security issues. We introduce GPU-Fuzz, a fuzzer that locates these issues efficiently by modeling operator parameters as formal constraints. GPU-Fuzz uses a constraint solver to generate test cases that systematically probe error-prone boundary conditions in GPU kernels. Applied to PyTorch, TensorFlow, and PaddlePaddle, GPU-Fuzz uncovered 13 previously unknown bugs, demonstrating its effectiveness at finding memory errors.

Subjects: Cryptography and Security (cs.CR); Machine Learning (cs.LG)
Cite as: arXiv:2602.10478 [cs.CR] (or arXiv:2602.10478v2 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.10478

Submission history
From: Zihao Li
[v1] Wed, 11 Feb 2026 03:32:43 UTC (504 KB)
[v2] Fri, 13 Feb 2026 03:04:02 UTC (504 KB)

Related Articles

Machine Learning

Finally Abliterated Sarvam 30B and 105B!

I abliterated Sarvam-30B and 105B - India's first multilingual MoE reasoning models - and found something interesting along the way! Reas...

Reddit - Artificial Intelligence · 1 min ·
Machine Learning

BANKING77-77: New best of 94.61% on the official test set (+0.13pp) over our previous tests 94.48%.

Hi everyone, Just wanted to share a small but hard-won milestone. After a long plateau at 94.48%, we’ve pushed the official BANKING77-77 ...

Reddit - Artificial Intelligence · 1 min ·
Machine Learning

Free tool I built to score dataset quality (LQS) — feedback welcome [D]

We built a Label Quality Score (LQS) system for our dataset marketplace and opened it up as a free standalone tool. Upload a dataset → ge...

Reddit - Machine Learning · 1 min ·
Machine Learning

Meta’s New AI Model Gives Mark Zuckerberg a Seat at the Big Kid’s Table | WIRED

Muse Spark is Meta’s first model since its AI reboot, and the benchmarks suggest formidable performance.

Wired - AI · 6 min ·

