[2405.06535] Controllable Image Generation with Composed Parallel Token Prediction
Computer Science > Computer Vision and Pattern Recognition

arXiv:2405.06535 (cs)

[Submitted on 10 May 2024 (v1), last revised 7 Apr 2026 (this version, v2)]

Title: Controllable Image Generation with Composed Parallel Token Prediction

Authors: Jamie Stirling, Noura Al-Moubayed, Chris G. Willcocks, Hubert P. H. Shum

Abstract: Conditional discrete generative models struggle to faithfully compose multiple input conditions. To address this, we derive a theoretically-grounded formulation for composing discrete probabilistic generative processes, with masked generation (absorbing diffusion) as a special case. Our formulation enables precise specification of novel combinations and numbers of input conditions that lie outside the training data, with concept weighting enabling emphasis or negation of individual conditions. In synergy with the richly compositional learned vocabulary of VQ-VAE and VQ-GAN, our method attains a $63.4\%$ relative reduction in error rate compared to the previous state-of-the-art, averaged across 3 datasets (positional CLEVR, relational CLEVR and FFHQ), simultaneously obtaining an average absolute FID improvement of $-9.58$. Meanwhile, our method offers a $2.3\times$ to $12\times$ real-time speed-up over comparable methods, and is readily applied to an open pre-trained discrete text-to-image m...
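The abstract describes composing several discrete conditional generative processes, with per-condition weights that emphasise or negate individual conditions. A common way to realise this kind of composition is a weighted product-of-experts in logit space: the weighted differences between each conditional distribution and an unconditional baseline are summed per token position. The sketch below illustrates that idea only; the function names and the exact combination rule are assumptions for illustration, not the paper's formulation or API.

```python
import numpy as np

def compose_token_logits(uncond_logits, cond_logits_list, weights):
    """Hypothetical sketch: weighted product-of-experts composition of
    per-position token logits around an unconditional baseline.

    composed = uncond + sum_i w_i * (cond_i - uncond)

    A weight w_i > 1 emphasises condition i, and w_i < 0 negates it.
    This mirrors the general spirit of concept weighting described in
    the abstract, not the paper's exact derivation.
    """
    composed = np.asarray(uncond_logits, dtype=float).copy()
    uncond = np.asarray(uncond_logits, dtype=float)
    for logits, w in zip(cond_logits_list, weights):
        composed += w * (np.asarray(logits, dtype=float) - uncond)
    return composed

def token_distribution(logits):
    """Numerically stable softmax over a vocabulary of discrete tokens."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()
```

In a parallel-token setting, this composition would be applied independently at every masked position, after which one token per position can be sampled from the resulting distribution.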