[2501.11782] Human-AI Collaborative Game Testing with Vision Language Models
Computer Science > Human-Computer Interaction
arXiv:2501.11782 (cs)
[Submitted on 20 Jan 2025 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: Human-AI Collaborative Game Testing with Vision Language Models
Authors: Boran Zhang, Muhan Xu, Zhijun Pan

Abstract: As modern video games become increasingly complex, traditional manual testing methods are proving costly and inefficient, limiting the ability to ensure high-quality game experiences. While advancements in Artificial Intelligence (AI) offer the potential to assist human testers, the effectiveness of AI in truly enhancing real-world human performance remains underexplored. This study investigates how AI can improve game testing by developing and experimenting with an AI-assisted workflow that leverages state-of-the-art machine learning models for defect detection. Through an experiment involving 800 test cases and 276 participants of varying backgrounds, we evaluate the effectiveness of AI assistance under four conditions: with or without AI support, and with or without detailed knowledge of defects and design documentation. The results indicate that AI assistance significantly improves defect identification performance, particularly when paired with detailed knowledge. However, challenges arise when AI errors occur, negatively impacting human decision-making…