[2602.15867] Playing With AI: How Do State-Of-The-Art Large Language Models Perform in the 1977 Text-Based Adventure Game Zork?
Summary
This paper evaluates the performance of state-of-the-art Large Language Models (LLMs) in the 1977 text-based adventure game Zork, revealing significant limitations in their problem-solving and reasoning abilities.
Why It Matters
Understanding how LLMs perform in structured environments like Zork sheds light on their reasoning capabilities and limitations. This research is crucial for developers and researchers in AI, as it highlights the challenges LLMs face in tasks requiring metacognition and adaptive problem-solving.
Key Takeaways
- LLMs achieved less than 10% completion on average in Zork.
- Detailed instructions did not improve performance significantly.
- Models displayed fundamental limitations in metacognitive abilities.
- Models pursued inconsistent strategies and failed to learn from past attempts.
- The study raises questions about the reasoning capabilities of current LLMs.
Computer Science > Computation and Language
arXiv:2602.15867 (cs) [Submitted on 27 Jan 2026]
Authors: Berry Gerrits
Abstract
In this positioning paper, we evaluate the problem-solving and reasoning capabilities of contemporary Large Language Models (LLMs) through their performance in Zork, the seminal text-based adventure game first released in 1977. The game's dialogue-based structure provides a controlled environment for assessing how LLM-based chatbots interpret natural language descriptions and generate appropriate action sequences to succeed in the game. We test the performance of leading proprietary models (ChatGPT, Claude, and Gemini) under both minimal and detailed instructions, measuring game progress through achieved scores as the primary metric. Our results reveal that all tested models achieve less than 10% completion on average, with even the best-performing model (Claude Opus 4.5) reaching only approximately 75 out of 350 possible points. Notably, providing detailed game instructions offers no improvement, nor does enabling "extended thinking". Qualitative analysis of the models' reasoning processes reveals fundamental limitations: repeated unsuccessful ...
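The evaluation protocol described in the abstract, a chatbot reading the game's text output, proposing the next command, and being scored on game progress, can be sketched as a simple agent-environment loop. This is a minimal illustrative sketch, not the authors' code: the toy game engine, the `play` driver, and the scripted stand-in policy below are all hypothetical, and a real run would replace the policy with calls to an LLM API.

```python
# Hedged sketch of a Zork-style evaluation loop. Assumptions: ToyZork,
# play(), and scripted_policy are illustrative stand-ins, not the paper's
# actual harness. Progress is measured by the score the game reports,
# mirroring the paper's primary metric (points out of 350).

import re

class ToyZork:
    """Minimal stand-in for a Zork-like text engine with a running score."""
    def __init__(self):
        self.score = 0
        self.lamp_on = False

    def step(self, command: str) -> str:
        command = command.strip().lower()
        if command == "turn on lamp":
            self.lamp_on = True
            return "The lamp is now on."
        if command == "take egg" and self.lamp_on:
            self.score += 5
            return f"Taken. Your score is {self.score} (in a maximum of 350)."
        if command == "score":
            return f"Your score is {self.score} (in a maximum of 350)."
        return "I don't understand that."

SCORE_RE = re.compile(r"Your score is (\d+)")

def parse_score(text: str):
    """Extract the reported score from game output, if any."""
    m = SCORE_RE.search(text)
    return int(m.group(1)) if m else None

def play(env, policy, max_turns: int = 10) -> int:
    """Drive the game: policy maps observation text -> command; return best score."""
    best, observation = 0, "West of House."
    for _ in range(max_turns):
        command = policy(observation)       # an LLM call in the real setup
        observation = env.step(command)
        score = parse_score(observation)
        if score is not None:
            best = max(best, score)
    return best

# A scripted "model" standing in for an LLM chatbot.
_plan = iter(["turn on lamp", "take egg", "score"])
def scripted_policy(_observation: str) -> str:
    return next(_plan, "look")

print(play(ToyZork(), scripted_policy, max_turns=3))  # 5
```

Under this framing, the paper's completion percentage is simply the returned score divided by the 350-point maximum; swapping `scripted_policy` for an API-backed one and `ToyZork` for the real interpreter yields the measured setup.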