[2602.16997] Exploring LLMs for User Story Extraction from Mockups

arXiv - AI 3 min read Article

Summary

This paper explores the use of large language models (LLMs) to extract user stories from high-fidelity mockups, a step toward automating requirements engineering in software development.

Why It Matters

As software development increasingly relies on user-centered design, automating user story extraction can streamline communication between stakeholders and improve the accuracy of functional requirements. This research highlights the potential of LLMs to enhance this process, making it more efficient and effective.

Key Takeaways

  • LLMs can automate the generation of user stories from mockups.
  • Incorporating a glossary significantly improves the accuracy of extracted user stories.
  • This approach enhances communication between users and developers.
  • The study presents a case study validating the effectiveness of LLMs in requirements engineering.
  • The findings contribute to the integration of AI in software development practices.

Computer Science > Software Engineering — arXiv:2602.16997 (cs) [Submitted on 19 Feb 2026]

Title: Exploring LLMs for User Story Extraction from Mockups

Authors: Diego Firmenich, Leandro Antonelli, Bruno Pazos, Fabricio Lozada, Leonardo Morales

Abstract: User stories are one of the most widely used artifacts in the software industry to define functional requirements. In parallel, the use of high-fidelity mockups facilitates end-user participation in defining their needs. In this work, we explore how combining these techniques with large language models (LLMs) enables agile and automated generation of user stories from mockups. To this end, we present a case study that analyzes the ability of LLMs to extract user stories from high-fidelity mockups, both with and without the inclusion of a glossary of the Language Extended Lexicon (LEL) in the prompts. Our results demonstrate that incorporating the LEL significantly enhances the accuracy and suitability of the generated user stories. This approach represents a step forward in the integration of AI into requirements engineering, with the potential to improve communication between users and developers.

Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)

ACM classes: D.2.1; I.2.7; D.2.2

Cite as: arXiv:2602.16997 [cs.SE]...
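The study's core comparison — prompting an LLM with a mockup description, with and without a Language Extended Lexicon (LEL) glossary — can be sketched as a prompt-construction step. This is a minimal illustration, not the authors' code: the function name, prompt wording, and glossary format are assumptions; only the with/without-LEL setup and the "As a <role>, I want <goal>, so that <benefit>" story template come from the paper's description.

```python
def build_extraction_prompt(mockup_description, lel_glossary=None):
    """Compose an LLM prompt asking for user stories in the canonical
    'As a <role>, I want <goal>, so that <benefit>' format."""
    parts = [
        "You are a requirements engineer.",
        "Extract user stories from the following high-fidelity mockup description.",
        "Write each story as: As a <role>, I want <goal>, so that <benefit>.",
    ]
    if lel_glossary:
        # The LEL supplies domain terms and their meanings; the paper reports
        # that including it improves the accuracy and suitability of the
        # generated user stories.
        glossary_lines = "\n".join(
            f"- {term}: {meaning}" for term, meaning in lel_glossary.items()
        )
        parts.append(
            "Use this domain glossary (Language Extended Lexicon):\n" + glossary_lines
        )
    parts.append("Mockup description:\n" + mockup_description)
    return "\n\n".join(parts)


# Hypothetical usage: the prompt string would be sent to an LLM of choice.
prompt = build_extraction_prompt(
    "A login screen with email and password fields and a 'Forgot password' link.",
    lel_glossary={
        "Credential": "An email/password pair identifying a registered user."
    },
)
```

The design choice mirrors the case study: keeping the glossary as an optional argument makes it easy to run the same mockup through both prompt variants and compare the extracted stories.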

Related Articles

Google Maps can now write captions for your photos using AI | TechCrunch

Gemini can now create captions when users are looking to share a photo or video.

TechCrunch - AI · 4 min

ParetoBandit: Budget-Paced Adaptive Routing for Non-Stationary LLM Serving

Reddit - Machine Learning · 1 min

Stop Overcomplicating AI Workflows. This Is the Simple Framework

I’ve been working on building an agentic AI workflow system for business use cases and one thing became very clear very quickly. This is ...

Reddit - Artificial Intelligence · 1 min

Lemonade 10.1 released for latest improvements for local LLMs on AMD GPUs & NPUs

Reddit - Artificial Intelligence · 1 min
