[2602.11042] Characterizing Trainability of Instantaneous Quantum Polynomial Circuit Born Machines
Summary
This paper explores the trainability of Instantaneous Quantum Polynomial Circuit Born Machines (IQP-QCBMs), addressing the challenges posed by barren plateaus and providing insights into their potential quantum advantages.
Why It Matters
Understanding the trainability of IQP-QCBMs is crucial for advancing quantum generative models. Barren plateaus, where loss gradients vanish exponentially with system size, can make training ineffective; this work identifies conditions under which IQP-QCBMs remain trainable and relates those regimes to the regimes where a quantum advantage is plausible.
Key Takeaways
- IQP-QCBMs can face trainability challenges due to barren plateaus.
- The choice of generator set and kernel spectrum significantly impacts gradient behavior.
- Low-weight-biased kernels can help avoid exponential gradient suppression.
- Small-variance Gaussian initialization can ensure polynomial scaling for gradients.
- Sparse IQP families can produce classically intractable distributions while remaining trainable.
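The training objective mentioned in the takeaways is the maximum mean discrepancy (MMD) between the model's output distribution and the data distribution. As a hedged illustration (not the paper's code; the Gaussian kernel on bitstrings and all names here are illustrative choices), a biased sample estimate of the squared MMD can be computed as:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Kernel on bitstrings represented as 0/1 numpy arrays.
    d = np.sum((x - y) ** 2)
    return np.exp(-d / (2 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    """Biased sample estimate of squared MMD between sample sets X and Y."""
    kxx = np.mean([gaussian_kernel(a, b, sigma) for a in X for b in X])
    kyy = np.mean([gaussian_kernel(a, b, sigma) for a in Y for b in Y])
    kxy = np.mean([gaussian_kernel(a, b, sigma) for a in X for b in Y])
    return kxx + kyy - 2 * kxy

# Identical sample sets give (up to floating point) zero MMD;
# clearly different sets give a positive value.
X = [np.array([0, 0, 1]), np.array([1, 0, 1]), np.array([0, 1, 0])]
Y = [np.array([1, 1, 1])] * 3
```

The kernel's spectrum is what the paper's takeaways refer to: how the kernel weights different bitstring "frequencies" determines whether gradients of this loss are exponentially suppressed.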
Quantum Physics (quant-ph)
arXiv:2602.11042
[Submitted on 11 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)]
Title: Characterizing Trainability of Instantaneous Quantum Polynomial Circuit Born Machines
Authors: Kevin Shen, Susanne Pielawa, Vedran Dunjko, Hao Wang
Abstract: Instantaneous quantum polynomial quantum circuit Born machines (IQP-QCBMs) have been proposed as quantum generative models with a classically tractable training objective based on the maximum mean discrepancy (MMD) and a potential quantum advantage motivated by sampling-complexity arguments, making them an exciting model worth deeper investigation. While recent works have further proven the universality of a (slightly generalized) model, the next immediate question pertains to its trainability, i.e., whether it suffers from the exponentially vanishing loss gradients, known as the barren plateau issue, preventing effective use, and how regimes of trainability overlap with regimes of possible quantum advantage. Here, we provide significant strides in these directions. To study the trainability at initialization, we analytically derive closed-form expressions for the variances of the partial derivatives of the MMD loss function and provide general upper and lower bounds. With uniform initialization, we show that barren p...
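To make the model concrete, a minimal statevector sketch of an IQP Born machine follows. This is an assumption-laden illustration, not the paper's implementation: an IQP circuit applies Hadamards on all qubits, commuting diagonal phase gates generated by products of Pauli-Z operators (the "generator set" in the takeaways), and Hadamards again; measuring yields the Born distribution over bitstrings. The particular generator set and the small-variance Gaussian initialization shown in the usage are illustrative choices.

```python
import numpy as np

def iqp_born_machine_probs(n, thetas, generators):
    """Born distribution of an IQP circuit: H^n, diagonal phases, H^n, measure.

    generators: list of index tuples S; each contributes theta * prod_{i in S} Z_i
    to the diagonal generator of the phase layer.
    """
    dim = 2 ** n
    # Computational-basis states as rows of bits.
    bits = (np.arange(dim)[:, None] >> np.arange(n)[::-1]) & 1
    # On basis state x, prod_{i in S} Z_i has eigenvalue (-1)^(sum of bits in S).
    phase = np.zeros(dim)
    for theta, S in zip(thetas, generators):
        signs = (-1.0) ** bits[:, list(S)].sum(axis=1)
        phase += theta * signs
    # |psi> = H^{(x)n} diag(e^{i*phase}) H^{(x)n} |0...0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = Hn @ (np.exp(1j * phase) * Hn[:, 0])
    return np.abs(state) ** 2

# Illustrative generator set: all single- and two-qubit Z products on 3 qubits.
gens = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
# Small-variance Gaussian initialization, as discussed in the takeaways.
thetas = np.random.default_rng(1).normal(0.0, 0.01, len(gens))
probs = iqp_born_machine_probs(3, thetas, gens)
```

With all angles zero the circuit is the identity (H^2 = I), so the distribution concentrates on the all-zeros string; small-variance initialization keeps the distribution close to that point, which is the regime the paper analyzes for trainable gradients.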