[2604.08575] MolPaQ: Modular Quantum-Classical Patch Learning for Interpretable Molecular Generation

arXiv - AI 3 min read

About this article

Abstract page for arXiv paper 2604.08575: MolPaQ: Modular Quantum-Classical Patch Learning for Interpretable Molecular Generation

Computer Science > Machine Learning
arXiv:2604.08575 (cs) [Submitted on 27 Mar 2026]

Title: MolPaQ: Modular Quantum-Classical Patch Learning for Interpretable Molecular Generation
Authors: Syed Rameez Naqvi, Lu Peng

Abstract: Molecular generative models must jointly ensure validity, diversity, and property control, yet existing approaches typically trade off among these objectives. We present MolPaQ, a modular quantum-classical generator that assembles molecules from quantum-generated latent patches. A β-VAE pretrained on QM9 learns a chemically aligned latent manifold; a reduced conditioner maps molecular descriptors into this space; and a parameter-efficient quantum patch generator produces entangled node embeddings that a valence-aware aggregator reconstructs into valid molecular graphs. Adversarial fine-tuning with a latent critic and a chemistry-shaped reward yields 100% RDKit validity, 99.75% novelty, and 0.905 diversity. Beyond aggregate metrics, the pretrained quantum generator, steered by the conditioner, improves mean QED by approx. 2.3% and increases aromatic motif incidence by approx. 10-12% relative to a parameter-matched classical generator, highlighting its role as a compact topology-shaping operator.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
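The aggregate metrics quoted in the abstract (novelty and diversity) can be sketched in a few lines. This is a toy illustration, not the paper's evaluation code: molecules are represented as SMILES strings, and a character-trigram set stands in for a real chemical fingerprint (the paper reports RDKit-based validity, and diversity is conventionally computed over Morgan fingerprints).

```python
def trigram_fp(smiles: str) -> set[str]:
    """Character-trigram 'fingerprint' -- a toy stand-in for a Morgan fingerprint."""
    return {smiles[i:i + 3] for i in range(len(smiles) - 2)}

def tanimoto(a: set[str], b: set[str]) -> float:
    """Tanimoto similarity between two set fingerprints."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def novelty(generated: list[str], training: set[str]) -> float:
    """Fraction of generated molecules that do not appear in the training set."""
    return sum(s not in training for s in generated) / len(generated)

def internal_diversity(generated: list[str]) -> float:
    """1 minus the mean pairwise Tanimoto similarity over generated molecules."""
    fps = [trigram_fp(s) for s in generated]
    sims = [tanimoto(fps[i], fps[j])
            for i in range(len(fps)) for j in range(i + 1, len(fps))]
    return 1.0 - sum(sims) / len(sims)

# Hypothetical example: four generated SMILES, one of which is in training.
gen = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]
train = {"CCO", "CCC"}
print(round(novelty(gen, train), 2))  # 0.75: only "CCO" appears in training
```

Under this convention, the paper's reported 99.75% novelty would mean that only 0.25% of generated molecules duplicate a QM9 training structure, and 0.905 diversity means generated molecules are, on average, highly dissimilar to one another.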

Originally published on April 13, 2026. Curated by AI News.

