[2603.21300] The Average Relative Entropy and Transpilation Depth determines the noise robustness in Variational Quantum Classifiers


arXiv - Machine Learning 4 min read


Quantum Physics arXiv:2603.21300 (quant-ph) [Submitted on 22 Mar 2026]

Title: The Average Relative Entropy and Transpilation Depth determines the noise robustness in Variational Quantum Classifiers

Authors: Aakash Ravindra Shinde, Arianne Meijer-van de Griend, Jukka K. Nurminen

Abstract: Variational Quantum Algorithms (VQAs) have been extensively researched for applications in Quantum Machine Learning (QML), optimization, and molecular simulations. Although designed for Noisy Intermediate-Scale Quantum (NISQ) devices, VQAs are predominantly evaluated classically due to uncertain results on noisy devices and limited resource availability, raising concerns over the reproducibility of simulated VQAs on noisy hardware. While prior studies indicate that VQAs may exhibit noise resilience in specific parameterized shallow quantum circuits, there are no definitive measures establishing what defines a shallow circuit or the optimal circuit depth for VQAs on a noisy platform. These challenges extend naturally to Variational Quantum Classification (VQC) algorithms, a subclass of VQAs for supervised learning. In this article, we propose a relative entropy-based metric to verify whether a VQC model would perform similarly on a noisy device as it does in simulation. We establish a str...
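The truncated abstract only names a "relative entropy-based metric" for comparing a VQC's behavior on a noisy device against its simulated behavior; the paper's exact averaging procedure is not shown here. As a rough illustration of the underlying quantity, here is a minimal Kullback-Leibler divergence sketch over measurement-outcome distributions (the function name, `eps` handling, and toy distributions are my own, not from the paper):

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """D(p || q): Kullback-Leibler divergence between two discrete
    distributions, e.g. bitstring frequencies from a noiseless VQC
    simulation (p) and from a noisy hardware run of the same circuit (q)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # normalize raw counts to probabilities
    mask = p > 0                     # 0 * log(0) contributes nothing
    # Clip q so outcomes the noisy run never produced don't divide by zero.
    return float(np.sum(p[mask] * np.log(p[mask] / np.clip(q[mask], eps, None))))

# Toy check: identical distributions give zero divergence; a
# noise-flattened distribution gives a positive value. Both
# distributions below are invented for illustration only.
ideal = [0.90, 0.05, 0.03, 0.02]   # simulated bitstring probabilities
noisy = [0.60, 0.15, 0.13, 0.12]   # depolarized-looking hardware counts
print(relative_entropy(ideal, ideal))
print(relative_entropy(ideal, noisy))
```

A small divergence would suggest the noisy device reproduces the simulated output distribution closely; how the paper averages this quantity across circuits or inputs is not recoverable from the truncated abstract.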

Originally published on March 24, 2026. Curated by AI News.

Related Articles

LLMs

[P] I built an autonomous ML agent that runs experiments on tabular data indefinitely - inspired by Karpathy's AutoResearch

Inspired by Andrej Karpathy's AutoResearch, I built a system where Claude Code acts as an autonomous ML researcher on tabular binary clas...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] Data curation and targeted replacement as a pre-training alignment and controllability method

Hi, r/MachineLearning: has much research been done in large-scale training scenarios where undesirable data has been replaced before trai...

Reddit - Machine Learning · 1 min ·
LLMs

[R] BraiNN: An Experimental Neural Architecture with Working Memory, Relational Reasoning, and Adaptive Learning

BraiNN An Experimental Neural Architecture with Working Memory, Relational Reasoning, and Adaptive Learning BraiNN is a compact research‑...

Reddit - Machine Learning · 1 min ·
Machine Learning

[HIRING] Remote AI Training Jobs - Up to $1K/Week | Collaborators Wanted. USA

submitted by /u/nortonakenga

Reddit - ML Jobs · 1 min ·

