[2411.19253] Quantum feedback control with a transformer neural network architecture

arXiv - Machine Learning

Summary

This article summarizes a paper that applies transformer neural networks to quantum feedback control, demonstrating high-fidelity state stabilization and energy minimization in numerically simulated quantum systems.

Why It Matters

The integration of transformer architectures in quantum feedback control represents a significant advancement in both quantum physics and machine learning. This research could enhance the efficiency of quantum state management, which is crucial for the development of quantum computing and error correction techniques.

Key Takeaways

  • Transformers can effectively manage quantum feedback control, surpassing baselines based on recurrent neural networks and policy-based reinforcement learning.
  • The proposed architecture achieves near-unit fidelity in state stabilization even with inefficient measurements, Hamiltonian perturbations not seen during training, and non-Markovian dynamics.
  • The approach could be applied to quantum error correction and real-time tuning of quantum devices.
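The core architectural claim in the takeaways above is that causal self-attention lets the controller use the entire measurement record, not just a recent window. The paper's actual architecture is not reproduced here; the following is a minimal single-head causal self-attention sketch in numpy, where `records`, the weight matrices, and the linear control head are all illustrative stand-ins.

```python
import numpy as np

def causal_attention(records, W_q, W_k, W_v):
    """Single-head causal self-attention over a time series of
    measurement records of shape (T, d). Each time step attends
    only to itself and the past, never to future measurements."""
    Q, K, V = records @ W_q, records @ W_k, records @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (T, T) similarity
    mask = np.triu(np.ones_like(scores, dtype=bool), 1)  # future positions
    scores[mask] = -np.inf                               # causal masking
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                   # (T, d_v) features

rng = np.random.default_rng(0)
T, d = 16, 4                      # 16 time steps of 4-dim measurement features
records = rng.normal(size=(T, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
features = causal_attention(records, W_q, W_k, W_v)
# A linear head could then map the feature at each step to a control amplitude.
controls = features @ rng.normal(size=(d,))
print(features.shape, controls.shape)
```

Because of the causal mask, the feature at the first time step depends only on the first measurement, so the resulting control signal is physically realizable in a feedback loop.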

Quantum Physics — arXiv:2411.19253 (quant-ph)
[Submitted on 28 Nov 2024 (v1), last revised 25 Feb 2026 (this version, v2)]

Title: Quantum feedback control with a transformer neural network architecture
Authors: Pranav Vaidhyanathan, Florian Marquardt, Mark T. Mitchison, Natalia Ares

Abstract: Attention-based neural networks such as transformers have revolutionized various fields such as natural language processing, genomics, and vision. Here, we demonstrate the use of transformers for quantum feedback control through both a supervised and a reinforcement learning approach. In particular, due to the transformer's ability to capture long-range temporal correlations and its training efficiency, we show that it can surpass some of the limitations of previous control approaches, e.g. those based on recurrent neural networks trained using a similar approach or on policy-based reinforcement learning. We numerically show, for the example of state stabilization of a two-level system, that our bespoke transformer architecture can achieve near-unit fidelity to a target state in a short time, even in the presence of inefficient measurement and Hamiltonian perturbations that were not included in the training set, as well as for the control of non-Markovian systems. We also demonstrate that our transformer can perform energy minimiz...
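The abstract's benchmark task is measurement-based feedback stabilization of a two-level system toward a target state under inefficient measurement. The toy loop below illustrates only the structure of that task: a Bloch-vector qubit, a noisy sigma-z estimate, and a controller that issues y-axis rotations. The proportional controller, gain, noise level, and dephasing rate are all invented stand-ins; the paper replaces the controller with a trained transformer acting on the full measurement record.

```python
import numpy as np

rng = np.random.default_rng(1)

def fidelity(bloch, target):
    """Fidelity of a qubit state (Bloch vector) with a pure target state."""
    return 0.5 * (1.0 + bloch @ target)

target = np.array([0.0, 0.0, 1.0])   # stabilize toward |0> (Bloch z = +1)
bloch = np.array([1.0, 0.0, 0.0])    # start in the |+> state
dt, gain, meas_noise = 0.05, 2.0, 0.1

for _ in range(400):
    # Inefficient measurement: a noisy estimate of <sigma_z>.
    z_est = bloch[2] + meas_noise * rng.normal()
    # Controller step. The paper trains a transformer on the measurement
    # record; a plain proportional controller stands in for it here.
    angle_err = np.arctan2(bloch[0], z_est)   # angle from the target axis
    theta = -gain * dt * angle_err
    # Apply the control pulse as a rotation about the y-axis.
    x, z = bloch[0], bloch[2]
    bloch[0] = x * np.cos(theta) + z * np.sin(theta)
    bloch[2] = z * np.cos(theta) - x * np.sin(theta)
    # Mild dephasing: the transverse component shrinks each step.
    bloch[0] *= 0.99

print(round(fidelity(bloch, target), 3))
```

Even this naive controller drives the fidelity well above its initial value of 0.5; the paper's point is that a transformer conditioned on the whole noisy record does substantially better, including on perturbed and non-Markovian dynamics.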
