[2503.23434] Towards Trustworthy GUI Agents: A Survey

arXiv - Machine Learning · 3 min read

Summary

This survey examines the challenges of building trustworthy GUI agents, identifying the execution gap (the misalignment between an agent's perception, reasoning, and interaction) and proposing a workflow-aligned taxonomy for understanding trust in these systems.

Why It Matters

As GUI agents increasingly perform critical tasks in digital environments, ensuring their trustworthiness is essential to prevent errors and security breaches. This survey provides a framework for evaluating and improving the reliability of these agents, making it relevant for researchers and practitioners in AI safety and human-computer interaction.

Key Takeaways

  • Trustworthiness is crucial for GUI agents because their actions, such as submitting forms or deleting data, can be irreversible.
  • The execution gap highlights misalignments in perception, reasoning, and interaction.
  • A new taxonomy categorizes trust into Perception Trust, Reasoning Trust, and Interaction Trust.
  • Evaluation practices must go beyond task completion to assess trust effectively.
  • Emerging metrics and benchmarks are needed to capture error cascades in GUI agents.

Computer Science > Machine Learning · arXiv:2503.23434 (cs)

[Submitted on 30 Mar 2025 (v1), last revised 24 Feb 2026 (this version, v2)]

Title: Towards Trustworthy GUI Agents: A Survey

Authors: Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu

Abstract: Graphical User Interface (GUI) agents extend large language models from text generation to action execution in real-world digital environments. Unlike conversational systems, GUI agents perform irreversible operations such as submitting forms, granting permissions, or deleting data, making trustworthiness a core requirement. This survey identifies the execution gap as a key challenge in building trustworthy GUI agents: the misalignment between perception, reasoning, and interaction in dynamic, partially observable interfaces. We introduce a workflow-aligned taxonomy that decomposes trust into Perception Trust, Reasoning Trust, and Interaction Trust, showing how failures propagate across agent pipelines and compound through action/observation loops. We systematically review benign failure modes and adversarial attacks at each stage, together with corresponding defense mechanisms tailored to GUI settings. We further analyze evaluation practices and argue that task completion alone is insufficient for trust assessment. We highlight emerging trust-aware metrics and benchmarks…
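The abstract's point that failures compound through action/observation loops can be illustrated with a toy calculation (not taken from the paper): if each stage of a perceive–reason–act pipeline succeeds independently with some probability, end-to-end reliability decays geometrically with the number of stages, which is why per-stage trust matters beyond final task completion.

```python
# Toy illustration (not from the survey): independent per-stage
# success probabilities multiply, so small per-stage error rates
# produce large end-to-end failure rates over long tasks.
from math import prod

def end_to_end_success(stage_probs):
    """Probability that every stage in the pipeline succeeds."""
    return prod(stage_probs)

# One perceive -> reason -> act pass, each stage 95% reliable:
single_pass = end_to_end_success([0.95, 0.95, 0.95])  # 0.857375

# A 10-step task repeats the 3-stage loop 10 times (30 stages):
ten_step_task = end_to_end_success([0.95] * 30)       # ~0.21
```

Under these illustrative numbers, even 95%-reliable stages leave a 10-step task failing nearly four times out of five, which motivates the survey's argument for measuring trust at each stage rather than only at task completion.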
