[2509.16779] Improving User Interface Generation Models from Designer Feedback
Summary
This paper explores improving user interface (UI) generation models by incorporating designer feedback, showing that feedback collected through designer-aligned interactions leads to better-performing models.
Why It Matters
As AI-generated UIs become increasingly prevalent, aligning these models with the workflows of designers is crucial for creating effective and user-friendly interfaces. This research addresses the gap in traditional feedback methods, offering insights that could transform how AI tools are developed for design.
Key Takeaways
- Traditional RLHF methods for UI generation often misalign with designer workflows.
- Incorporating designer feedback through familiar interactions improves model performance.
- The study involved 21 designers and resulted in 1500 design annotations for model training.
- Models fine-tuned with designer feedback outperformed traditional ranking-based models.
- This approach could lead to more intuitive and effective AI tools for UI design.
Computer Science > Human-Computer Interaction
arXiv:2509.16779 (cs)
[Submitted on 20 Sep 2025 (v1), last revised 16 Feb 2026 (this version, v2)]
Title: Improving User Interface Generation Models from Designer Feedback
Authors: Jason Wu, Amanda Swearngin, Arun Krishna Vajjala, Alan Leung, Jeffrey Nichols, Titus Barik
Abstract: Despite being trained on vast amounts of data, most LLMs are unable to reliably generate well-designed UIs. Designer feedback is essential to improving performance on UI generation; however, we find that existing RLHF methods based on ratings or rankings are not well aligned with designers' workflows and ignore the rich rationale used to critique and improve UI designs. In this paper, we investigate several approaches for designers to give feedback to UI generation models, using familiar interactions such as commenting, sketching, and direct manipulation. We first perform an evaluation with 21 designers where they gave feedback using these interactions, which resulted in 1500 design annotations. We then use this data to finetune a series of LLMs to generate higher-quality UIs. Finally, we evaluate these models with human judges, and we find that our designer-aligned approaches outperform models trained with traditional...