[2602.20070] Training-Free Generative Modeling via Kernelized Stochastic Interpolants
Summary
This paper presents a kernel method for generative modeling that eliminates neural network training: within the stochastic interpolant framework, the drift of the generative SDE is obtained by solving linear systems computable directly from data.
Why It Matters
The approach enables training-free generation: instead of fitting a neural network, the generative model is computed by solving small linear systems. This can streamline workflows in applications such as financial modeling and image generation, and lowers the computational barrier to building and combining generative models.
Key Takeaways
- Introduces a kernel method for generative modeling without neural network training.
- Computes the generative model by solving a P×P linear system whose size P is independent of the data dimension.
- Demonstrates applications in financial time series, turbulence, and image generation.
- Framework accommodates various feature maps, allowing for model combination.
- Handles the divergence of the optimal diffusion coefficient at t = 0 with a dedicated integrator, preserving sample quality despite inexact drift estimates.
Computer Science > Machine Learning
arXiv:2602.20070 (cs) [Submitted on 23 Feb 2026]
Title: Training-Free Generative Modeling via Kernelized Stochastic Interpolants
Authors: Florentin Coeurdoux, Etienne Lempereur, Nathanaël Cuvelle-Magar, Thomas Eboli, Stéphane Mallat, Anastasia Borovykh, Eric Vanden-Eijnden
Abstract: We develop a kernel method for generative modeling within the stochastic interpolant framework, replacing neural network training with linear systems. The drift of the generative SDE is $\hat b_t(x) = \nabla\phi(x)^\top\eta_t$, where $\eta_t\in\mathbb{R}^P$ solves a $P\times P$ system computable from data, with $P$ independent of the data dimension $d$. Since estimates are inexact, the diffusion coefficient $D_t$ affects sample quality; the optimal $D_t^*$ from Girsanov diverges at $t=0$, but this poses no difficulty and we develop an integrator that handles it seamlessly. The framework accommodates diverse feature maps -- scattering transforms, pretrained generative models, etc. -- enabling training-free generation and model combination. We demonstrate the approach on financial time series, turbulence, and image generation.
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2602.20070 [cs.LG] (or arXiv:2602.20070v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.20070
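The abstract's core recipe -- fit $\eta_t$ by solving a $P\times P$ linear system, then integrate the SDE with drift $\hat b_t(x) = \nabla\phi(x)^\top\eta_t$ -- can be illustrated on a toy problem. The sketch below is an assumption-laden illustration, not the paper's implementation: it assumes a linear interpolant $x_t = (1-t)x_0 + t x_1$, uses random Fourier features as a stand-in feature map, and substitutes a small constant diffusion for the paper's optimal $D_t^*$ and its dedicated integrator.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P, n = 2, 64, 2000              # data dim, feature count, sample count

x0 = rng.normal(size=(n, d))               # samples from the base rho_0
x1 = rng.normal(loc=3.0, size=(n, d))      # samples from the target rho_1

# Stand-in feature map: random Fourier features phi_p(x) = cos(w_p . x + c_p)
W = rng.normal(size=(P, d))
c = rng.uniform(0.0, 2.0 * np.pi, size=P)

def feat_jac(x):
    """Jacobian of phi at each row of x, shape (len(x), P, d)."""
    return -np.sin(x @ W.T + c)[:, :, None] * W[None, :, :]

def eta_at(t, lam=1e-6):
    """Solve the P x P system (E[J J^T] + lam*I) eta = E[J xdot] at time t."""
    xt = (1.0 - t) * x0 + t * x1           # linear interpolant x_t
    v = x1 - x0                            # interpolant velocity xdot_t
    J = feat_jac(xt).transpose(1, 0, 2).reshape(P, -1)   # (P, n*d)
    G = J @ J.T / n                        # Gram matrix E[grad_phi grad_phi^T]
    h = J @ v.reshape(-1) / n              # right-hand side E[grad_phi xdot]
    return np.linalg.solve(G + lam * np.eye(P), h)

def drift(x, t):
    """Estimated drift b_t(x) = grad_phi(x)^T eta_t, shape (len(x), d)."""
    return np.einsum('npd,p->nd', feat_jac(x), eta_at(t))

# Euler-Maruyama with a small constant diffusion (illustrative only; the
# paper uses the optimal time-dependent D_t^* and a dedicated integrator
# for its divergence at t = 0)
m, steps, D = 500, 40, 0.05
x = rng.normal(size=(m, d))                # initialize from rho_0
dt = 1.0 / steps
for k in range(steps):
    x = x + drift(x, k * dt) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=(m, d))

print(x.mean(axis=0))                      # samples are pushed toward rho_1
```

Note how the expensive object is only a $P\times P$ Gram matrix: the system size is set by the number of features, not by the data dimension, which is the source of the method's efficiency.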