[2603.19515] ItinBench: Benchmarking Planning Across Multiple Cognitive Dimensions with Large Language Models
Computer Science > Artificial Intelligence arXiv:2603.19515 (cs) [Submitted on 19 Mar 2026] Title: ItinBench: Benchmarking Planning Across Multiple Cognitive Dimensions with Large Language Models Authors: Tianlong Wang, Pinqiao Wang, Weili Shi, Sheng Li Abstract: Large language models (LLMs) with advanced cognitive capabilities are emerging as agents for various reasoning and planning tasks. Traditional evaluations often focus on specific reasoning or planning questions within controlled environments. Recent studies have explored travel planning as a medium for integrating various verbal reasoning tasks into real-world contexts. However, reasoning extends beyond verbal reasoning alone, and a comprehensive evaluation of LLMs requires a testbed that incorporates tasks from multiple cognitive domains. To address this gap, we introduce ItinBench, a benchmark that integrates a spatial reasoning task, route optimization, into trip itinerary planning while retaining the traditional verbal reasoning tasks. Using ItinBench, we evaluate a range of LLMs, including Llama 3.1 8B, Mistral Large, Gemini 1.5 Pro, and the GPT family, on these diverse tasks simultaneously. Our findings reveal that LLMs struggle to maintain high and consistent performance when concurrently handling multiple cognitive dimensions. ...
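The route optimization task the abstract names can be illustrated with a minimal sketch. This is not ItinBench's evaluation code; it assumes a small, hypothetical matrix of pairwise travel distances between attractions and finds the shortest visiting order by brute force, which is tractable only for the handful of points a daily itinerary contains.

```python
from itertools import permutations

def best_route(dist, start=0):
    """Brute-force shortest visiting order over all points,
    beginning at `start` (no return leg to the start)."""
    n = len(dist)
    others = [i for i in range(n) if i != start]
    best_order, best_len = None, float("inf")
    for perm in permutations(others):
        order = (start,) + perm
        # Sum the leg lengths along this visiting order.
        length = sum(dist[order[i]][order[i + 1]] for i in range(n - 1))
        if length < best_len:
            best_order, best_len = order, length
    return list(best_order), best_len

# Hypothetical symmetric distances between four attractions.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
route, total = best_route(dist)  # route [0, 1, 3, 2], total length 14
```

An LLM solving this task in natural language must implicitly perform the same search, which is one way a spatial reasoning dimension differs from the verbal constraints (budgets, opening hours) typical of travel planning benchmarks.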