[2502.03330] ControlGUI: Guiding Generative GUI Exploration through Perceptual Visual Flow
Computer Science > Human-Computer Interaction

arXiv:2502.03330 (cs)

[Submitted on 5 Feb 2025 (v1), last revised 28 Mar 2026 (this version, v3)]

Title: ControlGUI: Guiding Generative GUI Exploration through Perceptual Visual Flow

Authors: Aryan Garg, Yue Jiang, Antti Oulasvirta

Abstract: During the early stages of interface design, designers need to produce multiple sketches to explore a design space. Design tools often fail to support this critical stage because they insist on specifying more detail than necessary. Although recent advances in generative AI have raised hopes of solving this issue, in practice they fall short because expressing loose ideas in a prompt is impractical. In this paper, we propose a diffusion-based approach to the low-effort generation of interface sketches. It breaks new ground by allowing flexible control of the generation process via three types of inputs: A) prompts, B) wireframes, and C) visual flows. The designer can provide any combination of these as input, at any level of detail, and receives a diverse gallery of low-fidelity solutions in response. The unique benefit is that large design spaces can be explored rapidly with very little effort in input specification. We present qualitative results for various combinations of input specifications. Additionally, we demonstrate that our model aligns ...
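To make the input model concrete, the following is a minimal, purely illustrative sketch of how a request combining any subset of the three conditioning signals might be represented. The class name `GenerationRequest`, the field encodings (labeled wireframe boxes, a visual flow as an ordered list of points), and the helper method are assumptions for illustration, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GenerationRequest:
    """Hypothetical container for the three optional conditioning inputs.

    Any subset may be provided, each at an arbitrary level of detail,
    mirroring the paper's claim of flexible, low-effort specification.
    """
    # A) free-text prompt
    prompt: Optional[str] = None
    # B) wireframe as labeled boxes: (label, (x, y, width, height))
    wireframe: Optional[List[Tuple[str, Tuple[int, int, int, int]]]] = None
    # C) visual flow as an ordered attention path of (x, y) points
    visual_flow: Optional[List[Tuple[int, int]]] = None

    def active_conditions(self) -> List[str]:
        """Return the names of the signals the designer actually supplied."""
        provided = [("prompt", self.prompt),
                    ("wireframe", self.wireframe),
                    ("visual_flow", self.visual_flow)]
        return [name for name, value in provided if value is not None]

# A loose specification: only a prompt and a coarse visual flow, no wireframe.
req = GenerationRequest(
    prompt="minimal landing page",
    visual_flow=[(50, 40), (50, 200), (400, 500)],
)
print(req.active_conditions())
```

A generator built on this interface would condition the diffusion process only on the signals returned by `active_conditions()`, leaving the rest of the design space open for exploration.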