[2603.20697] Satellite-to-Street: Synthesizing Post-Disaster Views from Satellite Imagery via Generative Vision Models
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.20697 (cs)
[Submitted on 21 Mar 2026]

Title: Satellite-to-Street: Synthesizing Post-Disaster Views from Satellite Imagery via Generative Vision Models
Authors: Yifan Yang, Lei Zou, Wendy Jepson

Abstract: In the immediate aftermath of natural disasters, rapid situational awareness is critical. Satellite observations are widely used to estimate the extent of damage, but they lack the ground-level perspective needed to characterize specific structural failures and impacts. Meanwhile, ground-level data (e.g., street-view imagery) remain largely inaccessible during time-sensitive events. This study investigates satellite-to-street-view synthesis to bridge this data gap. We introduce two generative strategies for synthesizing post-disaster street views from satellite imagery: a Vision-Language Model (VLM)-guided approach and a damage-sensitive Mixture-of-Experts (MoE) method. We benchmark these against general-purpose baselines (Pix2Pix, ControlNet) using a proposed Structure-Aware Evaluation Framework, a multi-tier protocol that integrates (1) pixel-level quality assessment, (2) ResNet-based semantic consistency verification, and (3) a novel VLM-as-a-Judge for perceptual alignment. Experiments on 300...
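The three-tier evaluation protocol described in the abstract could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: PSNR stands in for the pixel-level tier, cosine similarity over precomputed CNN embeddings (e.g., ResNet features) stands in for the semantic tier, and the VLM-as-a-Judge tier is left as a stub since the paper's prompting and scoring details are not given here. All function names are assumptions.

```python
import numpy as np

def psnr(real, fake, max_val=255.0):
    """Tier 1: pixel-level quality via peak signal-to-noise ratio (dB)."""
    mse = np.mean((real.astype(np.float64) - fake.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def semantic_consistency(feat_real, feat_fake):
    """Tier 2: cosine similarity between feature embeddings of the real and
    synthesized views (e.g., extracted with a pretrained ResNet)."""
    a = np.asarray(feat_real, dtype=np.float64)
    b = np.asarray(feat_fake, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def vlm_judge_score(real_img, fake_img):
    """Tier 3: placeholder for a VLM-as-a-Judge call. A real system would
    send both images to a vision-language model with a rubric prompt and
    parse the returned perceptual-alignment rating."""
    raise NotImplementedError("plug in a VLM API here")

def evaluate_pair(real_img, fake_img, feat_real, feat_fake):
    """Combine the automatic tiers into one report dict."""
    return {
        "psnr_db": psnr(real_img, fake_img),
        "semantic_cosine": semantic_consistency(feat_real, feat_fake),
    }
```

In practice the three tiers answer complementary questions: pixel metrics catch low-level artifacts, embedding similarity checks that the scene semantics (building damage, debris) are preserved, and the VLM judge approximates human perceptual assessment.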