[2602.16523] Reinforcement Learning for Parameterized Quantum State Preparation: A Comparative Study
Summary
This paper explores reinforcement learning techniques for parameterized quantum state preparation, comparing one-stage and two-stage training regimes using Proximal Policy Optimization and Advantage Actor-Critic methods.
Why It Matters
As quantum computing advances, efficient state preparation becomes crucial. This study provides insights into optimizing quantum circuits using reinforcement learning, which could enhance the scalability and effectiveness of quantum algorithms, impacting various applications in quantum information science.
Key Takeaways
- The study compares one-stage and two-stage reinforcement learning approaches for quantum state preparation.
- Proximal Policy Optimization (PPO) learns effective policies in this setting, whereas Advantage Actor-Critic (A2C) fails to.
- One-stage PPO is recommended for practical applications due to its efficiency and accuracy.
- Scalability issues arise when targeting larger qubit systems, particularly beyond ten qubits.
- Explicit synthesized circuits are provided to aid in understanding the implementation.
Computer Science > Machine Learning
arXiv:2602.16523 (cs)
[Submitted on 18 Feb 2026]
Title: Reinforcement Learning for Parameterized Quantum State Preparation: A Comparative Study
Authors: Gerhard Stenzel, Isabella Debelic, Michael Kölle, Tobias Rohe, Leo Sünkel, Julian Hager, Claudia Linnhoff-Popien
Abstract: We extend directed quantum circuit synthesis (DQCS) with reinforcement learning from purely discrete gate selection to parameterized quantum state preparation with continuous single-qubit rotations \(R_x\), \(R_y\), and \(R_z\). We compare two training regimes: a one-stage agent that jointly selects the gate type, the affected qubit(s), and the rotation angle; and a two-stage variant that first proposes a discrete circuit and subsequently optimizes the rotation angles with Adam using parameter-shift gradients. Using Gymnasium and PennyLane, we evaluate Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) on systems comprising two to ten qubits and on targets of increasing complexity with \(\lambda\) ranging from one to five. Whereas A2C does not learn effective policies in this setting, PPO succeeds under stable hyperparameters (one-stage: learning rate approximately \(5\t...