[2602.13611] From What to How: Bridging User Requirements with Software Development Using Large Language Models


Summary

This paper examines the limitations of large language models (LLMs) in software design and code generation, proposing DesBench, a new benchmark for evaluating their capabilities on design-related tasks.

Why It Matters

As LLMs become integral to software development, understanding their strengths and weaknesses in design tasks is crucial. This research highlights significant gaps in current LLM capabilities, emphasizing the need for improved methodologies in software design that leverage AI effectively.

Key Takeaways

  • LLMs struggle with software design complexities, impacting code generation.
  • The proposed DesBench benchmark evaluates LLMs on design-aware tasks.
  • Acceptance test cases generated by LLMs show comparable quality to human-written tests.

Abstract

Computer Science > Software Engineering · arXiv:2602.13611 (cs) · Submitted on 14 Feb 2026
Authors: Xiao He, Ru Chen, Jialun Cao

Recently, large language models (LLMs) have been extensively utilized to enhance development efficiency, leading to numerous benchmarks for evaluating their performance. However, these benchmarks predominantly focus on implementation, overlooking the equally critical aspect of software design. This gap raises two pivotal questions: (1) Can LLMs handle software design? (2) Can LLMs write code that follows specific designs? To investigate these questions, this paper proposes DesBench, a design-aware benchmark for evaluating LLMs on three software design-related tasks: design-aware code generation, object-oriented modeling, and the design of acceptance test cases. DesBench comprises 30 manually crafted Java projects that include requirement documents, design models, implementations, and acceptance tests, amounting to a total of 30 design models, 194 Java classes, and 737 test cases. We evaluated seven state-of-the-art LLMs, including three DeepSeek R1, two Qwen2.5, and two GPT models, using DesBench. The results reveal that LLMs remain significantly challeng...
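As a purely hypothetical illustration (not taken from the paper or its benchmark), an acceptance test in a Java project of this kind might pair a natural-language requirement with an executable check, roughly like this:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test, assumed for this sketch only.
class ShoppingCart {
    private final List<Double> prices = new ArrayList<>();
    void addItem(double price) { prices.add(price); }
    double total() {
        return prices.stream().mapToDouble(Double::doubleValue).sum();
    }
}

public class CartAcceptanceTest {
    public static void main(String[] args) {
        // Assumed requirement: "The cart shall report the total
        // price of all items added to it."
        ShoppingCart cart = new ShoppingCart();
        cart.addItem(2.50);
        cart.addItem(4.00);
        // Acceptance criterion: total equals the sum of added prices.
        if (cart.total() != 6.50) {
            throw new AssertionError("total mismatch: " + cart.total());
        }
        System.out.println("acceptance test passed");
    }
}
```

The class name, requirement text, and criterion above are invented for illustration; the paper's actual projects and tests are available only through the benchmark itself.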

Related Articles

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED

The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos...

Wired - AI · 7 min

The public needs to control AI-run infrastructure, labor, education, and governance— NOT private actors

A lot of discussion around AI is becoming siloed, and I think that is dangerous. People in AI-focused spaces often talk as if the only qu...

Reddit - Artificial Intelligence · 1 min

Agents that write their own code at runtime and vote on capabilities, no human in the loop

hollowOS just hit v4.4 and I added something that I haven’t seen anyone else do. Previous versions gave you an OS for agents: structured ...

Reddit - Artificial Intelligence · 1 min
Google Maps can now write captions for your photos using AI | TechCrunch

Gemini can now create captions when users are looking to share a photo or video.

TechCrunch - AI · 4 min