[2603.21315] FluidWorld: Reaction-Diffusion Dynamics as a Predictive Substrate for World Models
Computer Science > Machine Learning

arXiv:2603.21315 (cs)

[Submitted on 22 Mar 2026]

Title: FluidWorld: Reaction-Diffusion Dynamics as a Predictive Substrate for World Models

Authors: Fabien Polly

Abstract: World models learn to predict future states of an environment, enabling planning and mental simulation. Current approaches default to Transformer-based predictors operating in learned latent spaces. This comes at a cost: O(N^2) computation and no explicit spatial inductive bias. This paper asks a foundational question: is self-attention necessary for predictive world modeling, or can alternative computational substrates achieve comparable or superior results? I introduce FluidWorld, a proof-of-concept world model whose predictive dynamics are governed by partial differential equations (PDEs) of reaction-diffusion type. Instead of using a separate neural network predictor, the PDE integration itself produces the future state prediction. In a strictly parameter-matched three-way ablation on unconditional UCF-101 video prediction (64x64, ~800K parameters, identical encoder, decoder, losses, and data), FluidWorld is compared against both a Transformer baseline (self-attention) and a ConvLSTM baseline (convolutional recurrence). While all three models converge to comparable single-step prediction loss, FluidWorld achieve...
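To make the "PDE integration as predictor" idea concrete, the following is a minimal, illustrative sketch of a discretized reaction-diffusion update (here the classic Gray-Scott system, chosen as a stand-in; the paper's actual dynamics, parameterization, and coupling to the encoder/decoder are not specified in this abstract). Rolling the explicit Euler step forward in time is what plays the role of the future-state prediction; all parameter values below are conventional Gray-Scott defaults, not values from the paper.

```python
import numpy as np

def laplacian(z):
    # 5-point finite-difference Laplacian with periodic boundaries
    return (np.roll(z, 1, axis=0) + np.roll(z, -1, axis=0)
            + np.roll(z, 1, axis=1) + np.roll(z, -1, axis=1) - 4.0 * z)

def reaction_diffusion_step(u, v, du=0.16, dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of Gray-Scott reaction-diffusion dynamics.

    u, v: 2D state fields (e.g., channels of a latent grid).
    du, dv: diffusion rates; f, k: feed/kill rates (standard demo values).
    """
    uvv = u * v * v  # reaction term: u + 2v -> 3v
    u_next = u + dt * (du * laplacian(u) - uvv + f * (1.0 - u))
    v_next = v + dt * (dv * laplacian(v) + uvv - (f + k) * v)
    return u_next, v_next

# The integrated field itself serves as the prediction: no separate
# attention-based predictor network is invoked between steps.
u = np.ones((64, 64))
v = np.zeros((64, 64))
u[28:36, 28:36] = 0.5   # seed a localized perturbation
v[28:36, 28:36] = 0.25
for _ in range(100):
    u, v = reaction_diffusion_step(u, v)
```

Note the contrast with self-attention: each update touches only a fixed local neighborhood (O(N) per step with a hard-wired spatial inductive bias), rather than all-pairs interactions at O(N^2).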