[2503.23434] Towards Trustworthy GUI Agents: A Survey
Summary
This survey explores the challenges of building trustworthy GUI agents, highlighting the execution gap and proposing a taxonomy for understanding trust in these systems.
Why It Matters
As GUI agents increasingly perform critical tasks in digital environments, ensuring their trustworthiness is essential to prevent errors and security breaches. This survey provides a framework for evaluating and improving the reliability of these agents, making it relevant for researchers and practitioners in AI safety and human-computer interaction.
Key Takeaways
- Trustworthiness in GUI agents is critical because many of their actions (e.g., submitting forms, deleting data) are irreversible.
- The execution gap refers to misalignment among perception, reasoning, and interaction in dynamic, partially observable interfaces.
- A new taxonomy categorizes trust into Perception Trust, Reasoning Trust, and Interaction Trust.
- Evaluation practices must go beyond task completion to assess trust effectively.
- Emerging metrics and benchmarks are needed to capture error cascades in GUI agents.
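The error-cascade point above can be made concrete with a toy back-of-the-envelope sketch (not from the survey; the stage names follow its taxonomy, but the success rates are illustrative assumptions): if each step of an episode must clear all three trust stages, even high per-stage reliability compounds into a low end-to-end success rate over a long task.

```python
# Toy illustration of error cascades in a GUI-agent pipeline.
# The per-stage success rates below are made-up assumptions, chosen
# only to show how failures compound across stages and steps.
STAGES = {"perception": 0.98, "reasoning": 0.95, "interaction": 0.97}

def step_success(stage_rates):
    """A single step succeeds only if every trust stage succeeds."""
    p = 1.0
    for rate in stage_rates.values():
        p *= rate
    return p

def episode_success(stage_rates, n_steps):
    """Errors cascade: an episode succeeds only if every step does."""
    return step_success(stage_rates) ** n_steps

per_step = step_success(STAGES)
print(f"per-step success: {per_step:.3f}")   # ~0.903
print(f"20-step episode:  {episode_success(STAGES, 20):.3f}")  # ~0.130
```

Under these assumed numbers, a ~90% per-step success rate collapses to roughly 13% over a 20-step task, which is why the survey argues that task-completion rates alone understate risk and that trust-aware metrics must track where in the pipeline failures originate.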
Computer Science > Machine Learning
arXiv:2503.23434 (cs)
[Submitted on 30 Mar 2025 (v1), last revised 24 Feb 2026 (this version, v2)]
Title: Towards Trustworthy GUI Agents: A Survey
Authors: Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
Abstract: Graphical User Interface (GUI) agents extend large language models from text generation to action execution in real-world digital environments. Unlike conversational systems, GUI agents perform irreversible operations such as submitting forms, granting permissions, or deleting data, making trustworthiness a core requirement. This survey identifies the execution gap as a key challenge in building trustworthy GUI agents: the misalignment between perception, reasoning, and interaction in dynamic, partially observable interfaces. We introduce a workflow-aligned taxonomy that decomposes trust into Perception Trust, Reasoning Trust, and Interaction Trust, showing how failures propagate across agent pipelines and compound through action/observation loops. We systematically review benign failure modes and adversarial attacks at each stage, together with corresponding defense mechanisms tailored to GUI settings. We further analyze evaluation practices and argue that task completion alone is insufficient for trust assessment. We highlight emerging trust-aware metrics and benchmarks…