[2603.02028] Latent attention on masked patches for flow reconstruction
Computer Science > Machine Learning
arXiv:2603.02028 (cs)
[Submitted on 2 Mar 2026]

Title: Latent attention on masked patches for flow reconstruction
Authors: Ben Eze, Luca Magri, Andrea Nóvoa

Abstract: Vision transformers have demonstrated outstanding performance in image generation applications, but their adoption in scientific disciplines, such as fluid dynamics, has been limited. We introduce the Latent Attention on Masked Patches (LAMP) model, an interpretable, regression-based, modified vision transformer designed for masked flow reconstruction. LAMP follows a three-fold strategy: (i) partition of each flow snapshot into patches, (ii) dimensionality reduction of each patch via patch-wise proper orthogonal decomposition, and (iii) reconstruction of the full field from a masked input using a single-layer transformer trained via closed-form linear regression. We test the method on two canonical 2D unsteady wakes: a wake past a bluff body, and a chaotic wake past a flat plate. We show that LAMP accurately reconstructs the full flow field from a 90%-masked and noisy input, across signal-to-noise ratios between 10 and 30 dB. Incorporating nonlinear measurement states can reduce the prediction error by up to an order of magnitude. The learned attention matrix yields physically interpretable multi-fidelity optimal sensor-placement...
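The three-fold strategy in the abstract — patch partition, patch-wise POD, and closed-form linear regression from a masked input — can be sketched on synthetic data. This is a minimal illustration under stated assumptions, not the authors' implementation: the synthetic rank-2 "flow", the fixed observation mask, the latent dimension `r`, and the ridge term `lam` are all choices made here for the example; the paper's single-layer transformer is replaced by a plain linear map fitted in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flow snapshots": T steps of a 16x16 scalar field built from
# two smooth space-time modes, standing in for a 2D unsteady wake.
T, H, W, P = 200, 16, 16, 4                      # P x P patches
xg, yg = np.meshgrid(np.linspace(0, 2 * np.pi, W),
                     np.linspace(0, 2 * np.pi, H))
t = np.arange(T)
snaps = (np.sin(xg)[None] * np.cos(0.1 * t)[:, None, None]
         + 0.5 * np.sin(2 * yg)[None] * np.sin(0.07 * t)[:, None, None])

# (i) Partition each snapshot into non-overlapping P x P patches.
def to_patches(f):
    return (f.reshape(H // P, P, W // P, P)
             .transpose(0, 2, 1, 3)
             .reshape(-1, P * P))                # (n_patches, P*P)

patches = np.stack([to_patches(s) for s in snaps])   # (T, Np, P*P)
Np = patches.shape[1]

# (ii) Patch-wise POD: one SVD per patch location, keep r modes.
r = 3
bases = []
for p in range(Np):
    U, _, _ = np.linalg.svd(patches[:, p, :].T, full_matrices=False)
    bases.append(U[:, :r])                       # (P*P, r)
latent = np.stack([patches[:, p, :] @ bases[p] for p in range(Np)],
                  axis=1)                        # (T, Np, r)

# (iii) Masked reconstruction in closed form: observe ~10% of patch
# locations (a fixed "sensor" set) and fit a ridge-regularized linear
# map from the observed latents to the full latent state.
kept = rng.choice(Np, size=max(1, Np // 10) + 1, replace=False)
A = latent[:, kept, :].reshape(T, -1)            # inputs  (T, |kept|*r)
B = latent.reshape(T, -1)                        # targets (T, Np*r)
lam = 1e-6                                       # ridge regularization
Wmap = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

recon_latent = (A @ Wmap).reshape(T, Np, r)
recon = np.stack([recon_latent[:, p, :] @ bases[p].T for p in range(Np)],
                 axis=1)                         # back to patch space

err = np.linalg.norm(recon - patches) / np.linalg.norm(patches)
print(f"relative reconstruction error: {err:.2e}")
```

Because the synthetic field is exactly rank two, a handful of observed patches determines the full latent state and the closed-form fit recovers the field almost exactly; on real turbulent data the residual error and the choice of observed patches (cf. the paper's attention-based sensor placement) matter far more.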