[2603.29199] AEC-Bench: A Multimodal Benchmark for Agentic Systems in Architecture, Engineering, and Construction
Computer Science > Artificial Intelligence

arXiv:2603.29199 (cs) [Submitted on 31 Mar 2026]

Title: AEC-Bench: A Multimodal Benchmark for Agentic Systems in Architecture, Engineering, and Construction

Authors: Harsh Mankodiya, Chase Gallik, Theodoros Galanos, Andriy Mulyar

Abstract: AEC-Bench is a multimodal benchmark for evaluating agentic systems on real-world tasks in the Architecture, Engineering, and Construction (AEC) domain. The benchmark covers tasks that require drawing understanding, cross-sheet reasoning, and construction project-level coordination. This report describes the benchmark's motivation, dataset taxonomy, evaluation protocol, and baseline results across several domain-specific foundation model harnesses. We use AEC-Bench to identify tools and harness-design techniques that consistently improve performance across foundation models running in their own base harnesses, such as Claude Code and Codex. For full replicability, we openly release our benchmark dataset, agent harness, and evaluation code at this https URL under an Apache 2.0 license.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.29199 [cs.AI] (or arXiv:2603.29199v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.29199 (arXiv-issued DOI via DataCite)