[2603.26285] PhysVid: Physics Aware Local Conditioning for Generative Video Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.26285 (cs)

[Submitted on 27 Mar 2026]

Title: PhysVid: Physics Aware Local Conditioning for Generative Video Models

Authors: Saurabh Pathak, Elahe Arani, Mykola Pechenizkiy, Bahram Zonooz

Abstract: Generative video models achieve high visual fidelity but often violate basic physical principles, limiting their reliability in real-world settings. Prior attempts to inject physics rely on conditioning: frame-level signals are domain-specific and short-horizon, while global text prompts are coarse and noisy, missing fine-grained dynamics. We present PhysVid, a physics-aware local conditioning scheme that operates over temporally contiguous chunks of frames. Each chunk is annotated with physics-grounded descriptions of states, interactions, and constraints, which are fused with the global prompt via chunk-aware cross-attention during training. At inference, we introduce negative physics prompts (descriptions of locally relevant law violations) to steer generation away from implausible trajectories. On VideoPhy, PhysVid improves physical commonsense scores by $\approx 33\%$ over baseline video generators, and by up to $\approx 8\%$ on VideoPhy2. These results show that local, physics-aware guidance substantially increases physical plausibility in generative video models.
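The two mechanisms in the abstract can be sketched generically: frames are grouped into temporally contiguous chunks (each of which would carry its own physics description), and at inference a negative prompt steers sampling away from a violation description in the style of classifier-free guidance. This is a minimal, framework-agnostic sketch under those assumptions; `model`, the embedding arguments, and the guidance form are illustrative placeholders, not PhysVid's actual interfaces.

```python
def chunk_frames(num_frames, chunk_size):
    """Split a clip into temporally contiguous chunks of frame indices.

    Each chunk would be paired with a physics-grounded description
    (states, interactions, constraints) during training.
    """
    return [list(range(i, min(i + chunk_size, num_frames)))
            for i in range(0, num_frames, chunk_size)]


def guided_noise_pred(model, x_t, t, cond_emb, neg_emb, scale=7.5):
    """Guidance-style steering away from a negative physics prompt.

    The negative embedding (a description of a locally relevant law
    violation) plays the role usually taken by the unconditional
    embedding, so the update direction pushes generation away from
    the implausible trajectory. `model(x_t, t, emb)` is an assumed
    denoiser interface.
    """
    eps_cond = model(x_t, t, cond_emb)  # prediction with the physics-aware prompt
    eps_neg = model(x_t, t, neg_emb)    # prediction with the violation description
    return eps_neg + scale * (eps_cond - eps_neg)
```

With a stub denoiser returning scalar predictions, `guided_noise_pred(stub, 0.0, 0, "cond", "neg", scale=2.0)` amplifies the conditional direction relative to the negative one; the chunking helper simply partitions frame indices without overlap.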