[2604.02580] VoxelCodeBench: Benchmarking 3D World Modeling Through Code Generation
Computer Science > Machine Learning
arXiv:2604.02580 (cs)
[Submitted on 2 Apr 2026]
Title: VoxelCodeBench: Benchmarking 3D World Modeling Through Code Generation
Authors: Yan Zheng, Florian Bordes
Abstract: Evaluating code generation models for 3D spatial reasoning requires executing generated code in realistic environments and assessing outputs beyond surface-level correctness. We introduce VoxelCode, a platform for analyzing code generation capabilities for 3D understanding and environment creation. Our platform integrates natural language task specification, API-driven code execution in Unreal Engine, and a unified evaluation pipeline supporting both automated metrics and human assessment. To demonstrate its utility, we construct VoxelCodeBench, a benchmark of voxel manipulation tasks spanning three reasoning dimensions: symbolic interpretation, geometric construction, and artistic composition. Evaluating leading code generation models, we find that producing executable code is far easier than producing spatially correct outputs, with geometric construction and multi-object composition proving particularly challenging. By open-sourcing our platform and benchmark, we provide the community with extensible infrastructure for developing new 3D code generation benchmarks and probing spatial reasoning in future models.
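The abstract describes a three-stage pipeline: natural language task specification, execution of generated code, and evaluation that separates mere executability from spatial correctness. The sketch below illustrates that separation in Python under stated assumptions; all names (`Task`, `execute_generated_code`, `voxel_iou`) are hypothetical, not the VoxelCode API, and a simple `eval` stands in for API-driven execution in Unreal Engine.

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str                        # natural-language task specification
    target_voxels: set                 # ground-truth voxel coordinates

def execute_generated_code(code: str) -> set:
    """Stand-in for engine execution (assumption): evaluate a Python
    expression that yields a set of (x, y, z) voxel coordinates."""
    return eval(code, {"__builtins__": {}})

def voxel_iou(pred: set, target: set) -> float:
    """Automated metric (illustrative): intersection-over-union of voxel
    sets, a proxy for 'spatially correct output' beyond executability."""
    if not pred and not target:
        return 1.0
    return len(pred & target) / len(pred | target)

def evaluate(task: Task, generated_code: str) -> dict:
    """Score one model output: did it run at all, and how close is the
    resulting voxel set to the target?"""
    try:
        pred = execute_generated_code(generated_code)
        executable = True
    except Exception:
        pred, executable = set(), False
    return {"executable": executable,
            "iou": voxel_iou(pred, task.target_voxels)}

# Example: target is a 2-voxel bar; the "model" places one voxel too many,
# so the code is executable but not spatially correct (IoU = 2/3).
task = Task(prompt="Place a 2-voxel bar along x at the origin.",
            target_voxels={(0, 0, 0), (1, 0, 0)})
result = evaluate(task, "{(0, 0, 0), (1, 0, 0), (2, 0, 0)}")
```

This mirrors the paper's headline finding in miniature: the executability check and the spatial metric are distinct axes, and a submission can pass the first while scoring poorly on the second.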