[2507.01110] A LoD of Gaussians: Unified Training and Rendering for Ultra-Large Scale Reconstruction with External Memory
Summary
The paper presents a novel framework, A LoD of Gaussians, for ultra-large-scale scene reconstruction and rendering using Gaussian splatting, enabling real-time performance on consumer-grade GPUs without scene partitioning.
Why It Matters
This research addresses a key limitation of existing Gaussian-splatting pipelines: scaling to large environments has so far required partitioning the scene into chunks, which introduces boundary artifacts and complicates training across scales. By eliminating partitioning, the framework improves both quality and efficiency of rendering in complex scenarios, which is crucial for applications in virtual reality, urban planning, and gaming.
Key Takeaways
- Introduces a unified framework for training and rendering large-scale Gaussian scenes.
- Eliminates the need for scene partitioning, reducing artifacts and improving training across scales.
- Utilizes a hybrid data structure for efficient Level-of-Detail selection and rendering.
- Supports real-time streaming and visualization of complex scenes with high detail.
- Demonstrates potential applications in various fields, including VR and urban modeling.
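To make the Level-of-Detail idea in the takeaways concrete, here is a minimal sketch of view-dependent LoD selection over a hierarchy of merged Gaussians. This is not the paper's hybrid data structure; the node layout, the screen-space footprint formula, and the pixel threshold are illustrative assumptions only.

```python
import math
from dataclasses import dataclass, field

# Hypothetical node in a Gaussian LoD hierarchy: an inner node stores a
# coarse, merged Gaussian that summarizes its children (an assumption for
# illustration, not the paper's actual representation).
@dataclass
class GaussianNode:
    center: tuple                 # world-space position (x, y, z)
    radius: float                 # bounding radius of the (merged) Gaussian
    children: list = field(default_factory=list)

def select_lod(node, cam_pos, pixel_threshold, focal=1000.0):
    """Return the cut of the hierarchy appropriate for the current view.

    A node is rendered as-is when its approximate screen-space footprint
    (focal * radius / distance, a common pinhole approximation) falls below
    `pixel_threshold`; otherwise we descend to its children for finer detail.
    """
    dist = math.dist(node.center, cam_pos)
    footprint = focal * node.radius / max(dist, 1e-6)
    if footprint <= pixel_threshold or not node.children:
        return [node]             # coarse node suffices, or we hit a leaf
    cut = []
    for child in node.children:
        cut.extend(select_lod(child, cam_pos, pixel_threshold, focal))
    return cut

# Two-level example: one merged root covering two finer Gaussians.
root = GaussianNode((0.0, 0.0, 0.0), 2.0, [
    GaussianNode((1.0, 0.0, 0.0), 1.0),
    GaussianNode((-1.0, 0.0, 0.0), 1.0),
])

far_cut = select_lod(root, (0.0, 0.0, 100.0), pixel_threshold=30.0)
near_cut = select_lod(root, (0.0, 0.0, 10.0), pixel_threshold=30.0)
print(len(far_cut), len(near_cut))  # far view keeps 1 coarse node, near view 2 fine ones
```

A distant camera makes the root's projected footprint small, so the single merged Gaussian is used; moving closer pushes the footprint over the threshold and the traversal descends to the finer children. In an out-of-core setting like the paper's, such a cut would also drive which Gaussians are streamed into VRAM.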
Computer Science > Graphics
arXiv:2507.01110 (cs)
[Submitted on 1 Jul 2025 (v1), last revised 17 Feb 2026 (this version, v4)]
Title: A LoD of Gaussians: Unified Training and Rendering for Ultra-Large Scale Reconstruction with External Memory
Authors: Felix Windisch, Thomas Köhler, Lukas Radl, Mattia D'Urso, Michael Steiner, Dieter Schmalstieg, Markus Steinberger
Abstract: Gaussian Splatting has emerged as a high-performance technique for novel view synthesis, enabling real-time rendering and high-quality reconstruction of small scenes. However, scaling to larger environments has so far relied on partitioning the scene into chunks -- a strategy that introduces artifacts at chunk boundaries, complicates training across varying scales, and is poorly suited to unstructured scenarios such as city-scale flyovers combined with street-level views. Moreover, rendering remains fundamentally limited by GPU memory, as all visible chunks must reside in VRAM simultaneously. We introduce A LoD of Gaussians, a framework for training and rendering ultra-large-scale Gaussian scenes on a single consumer-grade GPU -- without partitioning. Our method stores the full scene out-of-core (e.g., in C...