[2602.22716] SoPE: Spherical Coordinate-Based Positional Embedding for Enhancing Spatial Perception of 3D LVLMs
Summary
The paper presents SoPE, a novel Spherical Coordinate-Based Positional Embedding method aimed at improving the spatial perception capabilities of 3D Large Vision-Language Models (3D LVLMs) by addressing limitations in existing positional encoding techniques.
Why It Matters
As 3D LVLMs see broader use in multimodal applications, strengthening their spatial understanding is crucial. SoPE improves how these models encode spatial structure, which can translate into better performance on multimodal tasks involving 3D data.
Key Takeaways
- SoPE improves the encoding of 3D tokens by mapping them into a spherical coordinate space.
- The method preserves geometric structures and enhances spatial awareness in 3D LVLMs.
- Experimental results demonstrate SoPE's effectiveness across multiple 3D scene benchmarks.
- A multi-scale frequency mixing strategy is introduced to enhance feature fusion.
- Real-world deployment shows strong generalization capabilities of the SoPE method.
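The core idea in the first takeaway is a change of coordinate system. The paper does not publish its exact formulation in this summary, but the underlying mapping from Cartesian point coordinates to spherical coordinates (radius, polar angle, azimuth) can be sketched as follows; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def cartesian_to_spherical(xyz):
    """Illustrative sketch: map Cartesian coordinates (x, y, z) to
    spherical coordinates (r, theta, phi) -- radius, polar angle,
    azimuth. A unified (r, theta, phi) representation is what lets a
    positional embedding encode distance and direction jointly."""
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    # Polar angle in [0, pi]; guard against division by zero at the origin.
    theta = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(y, x)  # azimuth in (-pi, pi]
    return np.stack([r, theta, phi], axis=-1)

points = np.array([[1.0, 0.0, 0.0],   # on the x-axis
                   [0.0, 0.0, 2.0]])  # on the z-axis
print(cartesian_to_spherical(points))
# → [[1.  1.5708  0.], [2.  0.  0.]]
```

Representing each token position as (r, theta, phi) makes angular relationships between tokens explicit, which is exactly the directional information the paper argues vanilla RoPE discards.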
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.22716 (cs) [Submitted on 26 Feb 2026]
Authors: Guanting Ye, Qiyan Zhao, Wenhao Yu, Liangyu Yuan, Mingkai Li, Xiaofeng Zhang, Jianmin Ji, Yanyong Zhang, Qing Jiang, Ka-Veng Yuen
Abstract: 3D Large Vision-Language Models (3D LVLMs) built upon Large Language Models (LLMs) have achieved remarkable progress across various multimodal tasks. However, their inherited position-dependent modeling mechanism, Rotary Position Embedding (RoPE), remains suboptimal for 3D multimodal understanding. The vanilla RoPE formulation fails to preserve essential three-dimensional spatial structures when encoding 3D tokens, and its relative distance computation overlooks angular dependencies, hindering the model's ability to capture directional variations in visual representations. To overcome these limitations, we introduce Spherical Coordinate-based Positional Embedding (SoPE). Our method maps point-cloud token indices into a 3D spherical coordinate space, enabling unified modeling of spatial locations and directional angles. This formulation preserves the inherent geometric structure of point-cloud data, enhances spatial awareness, an...
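For context on the baseline the abstract critiques: vanilla RoPE encodes a token's position as a scalar sequence index and rotates pairs of feature dimensions by index-dependent angles. A minimal sketch of that 1D mechanism (function name and base frequency follow common RoPE conventions, not anything specific to this paper) makes the limitation concrete: positions are one-dimensional, so 3D structure and inter-token angles are never represented.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Minimal 1D RoPE sketch: rotate consecutive feature pairs of a
    token vector x by angles pos * base**(-2i/d). `pos` is a scalar
    sequence index -- the one-dimensional assumption the paper argues
    discards 3D spatial structure and angular dependencies."""
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)  # per-pair frequencies, shape (d/2,)
    angles = pos * inv_freq
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # 2D rotation applied to each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = np.ones(8)
rotated = rope_rotate(q, pos=3)
# Rotations preserve the vector norm while making dot products
# between two rotated tokens depend only on their relative index.
print(np.linalg.norm(rotated), np.linalg.norm(q))
```

Because each position is a single integer, the relative signal between two tokens reduces to a 1D offset; SoPE's spherical mapping replaces that scalar index with (r, theta, phi) coordinates so direction is modeled alongside distance.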