[2603.23447] 3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.23447 (cs) [Submitted on 24 Mar 2026]

Title: 3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding

Authors: Yiping Chen, Jinpeng Li, Wenyu Ke, Yang Luo, Jie Ouyang, Zhongjie He, Li Liu, Hongchao Fan, Hao Wu

Abstract: While multi-modality large language models excel in object-centric or indoor scenarios, scaling them to 3D city-scale environments remains a formidable challenge. To bridge this gap, we propose 3DCity-LLM, a unified framework for 3D city-scale vision-language perception and understanding. 3DCity-LLM employs a coarse-to-fine feature encoding strategy comprising three parallel branches for the target object, inter-object relationships, and the global scene. To facilitate large-scale training, we introduce the 3DCity-LLM-1.2M dataset, which comprises approximately 1.2 million high-quality samples across seven representative task categories, ranging from fine-grained object analysis to multi-faceted scene planning. This strictly quality-controlled dataset integrates explicit 3D numerical information and diverse user-oriented simulations, enriching the question-answering diversity and realism of urban scenarios. Furthermore, we apply a multi-dimensional ...
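The abstract names a three-branch coarse-to-fine encoder but does not specify its internals. Below is a minimal sketch of how such branches could be wired and fused, assuming PyTorch, raw point-cloud inputs, mean pooling, attention over precomputed neighbor embeddings, concatenation fusion, and a 4096-dimensional LLM token space; every one of these choices (including the class and argument names) is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoarseToFineEncoder(nn.Module):
    """Hypothetical three-branch encoder: target object (fine),
    inter-object relationship (middle), global scene (coarse).
    Layer choices and concatenation fusion are assumptions."""

    def __init__(self, point_dim: int = 6, embed_dim: int = 256, llm_dim: int = 4096):
        super().__init__()
        # Fine branch: per-point MLP over the target object's points.
        self.object_branch = nn.Sequential(
            nn.Linear(point_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Relationship branch: the target embedding attends to
        # embeddings of nearby objects (assumed precomputed).
        self.relation_attn = nn.MultiheadAttention(
            embed_dim, num_heads=4, batch_first=True)
        # Coarse branch: per-point MLP over a downsampled scene cloud.
        self.scene_branch = nn.Sequential(
            nn.Linear(point_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Project the fused branches into the LLM embedding space.
        self.to_llm = nn.Linear(3 * embed_dim, llm_dim)

    def forward(self, obj_pts, neighbor_feats, scene_pts):
        # obj_pts: (B, N_obj, point_dim), neighbor_feats: (B, K, embed_dim),
        # scene_pts: (B, N_scene, point_dim)
        obj = self.object_branch(obj_pts).mean(dim=1)            # (B, D)
        rel, _ = self.relation_attn(
            obj.unsqueeze(1), neighbor_feats, neighbor_feats)    # (B, 1, D)
        scene = self.scene_branch(scene_pts).mean(dim=1)         # (B, D)
        fused = torch.cat([obj, rel.squeeze(1), scene], dim=-1)  # (B, 3D)
        return self.to_llm(fused)                                # (B, llm_dim)

# Usage with dummy tensors: two scenes, 1024 object points, 8 neighbors.
enc = CoarseToFineEncoder()
tokens = enc(torch.randn(2, 1024, 6), torch.randn(2, 8, 256), torch.randn(2, 4096, 6))
print(tokens.shape)  # torch.Size([2, 4096])
```

Pooling each branch to a single vector keeps the sketch compact; a real city-scale system would more plausibly emit a sequence of object and scene tokens rather than one fused vector.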