[2602.20070] Training-Free Generative Modeling via Kernelized Stochastic Interpolants



Summary

This paper introduces a kernel method for generative modeling that eliminates neural-network training entirely, replacing it with the solution of linear systems.

Why It Matters

The approach enables training-free generation: because the generative model is computed by solving linear systems rather than by training a network, it can streamline workflows in applications such as financial modeling and image generation, and lowers the cost of building and combining generative models.

Key Takeaways

  • Introduces a kernel method for generative modeling without neural network training.
  • Utilizes linear systems to compute generative models, enhancing efficiency.
  • Demonstrates applications in financial time series, turbulence, and image generation.
  • Framework accommodates various feature maps, allowing for model combination.
  • Addresses sample-quality degradation from inexact drift estimates by choosing the diffusion coefficient, with an integrator that handles its divergence at t = 0.
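As an illustration of the linear-system idea, here is a minimal sketch, not the paper's algorithm: the velocity of a linear interpolant is regressed onto a hypothetical random-Fourier feature map, so the fit reduces to a P x P ridge system whose size is independent of the data dimension d. The paper's drift takes the gradient form $\nabla\phi(x)^\top\eta_t$; this toy regresses on the features directly, and the feature map, regularizer, and target distribution are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P, N = 2, 64, 2000            # data dim, feature count, sample count
W = rng.normal(size=(d, P))      # fixed random projections (illustrative)
b = rng.uniform(0, 2 * np.pi, P)

def phi(x):
    # Random Fourier features: a simple stand-in for the paper's feature maps
    # (scattering transforms, pretrained models, etc.)
    return np.sqrt(2.0 / P) * np.cos(x @ W + b)

x0 = rng.normal(size=(N, d))              # base samples ~ N(0, I)
x1 = rng.normal(loc=3.0, size=(N, d))     # stand-in "data" samples

t = 0.5
xt = (1 - t) * x0 + t * x1                # linear interpolant at time t
vt = x1 - x0                              # its time derivative (regression target)

# Ridge regression in feature space: a P x P system, independent of d.
Phi = phi(xt)                             # (N, P)
lam = 1e-6
A = Phi.T @ Phi / N + lam * np.eye(P)     # (P, P) Gram matrix
eta_t = np.linalg.solve(A, Phi.T @ vt / N)  # one eta per output dim: (P, d)

def b_hat(x):
    # Estimated drift at time t, evaluated at new points x
    return phi(x) @ eta_t
```

Solving the same system on a grid of times t yields the full drift $\hat b_t$ used to integrate the generative SDE, with no gradient-based training anywhere.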

Computer Science > Machine Learning

arXiv:2602.20070 (cs) [Submitted on 23 Feb 2026]

Title: Training-Free Generative Modeling via Kernelized Stochastic Interpolants

Authors: Florentin Coeurdoux, Etienne Lempereur, Nathanaël Cuvelle-Magar, Thomas Eboli, Stéphane Mallat, Anastasia Borovykh, Eric Vanden-Eijnden

Abstract: We develop a kernel method for generative modeling within the stochastic interpolant framework, replacing neural network training with linear systems. The drift of the generative SDE is $\hat b_t(x) = \nabla\phi(x)^\top\eta_t$, where $\eta_t \in \mathbb{R}^P$ solves a $P\times P$ system computable from data, with $P$ independent of the data dimension $d$. Since estimates are inexact, the diffusion coefficient $D_t$ affects sample quality; the optimal $D_t^*$ from Girsanov diverges at $t=0$, but this poses no difficulty and we develop an integrator that handles it seamlessly. The framework accommodates diverse feature maps -- scattering transforms, pretrained generative models, etc. -- enabling training-free generation and model combination. We demonstrate the approach on financial time series, turbulence, and image generation.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2602.20070 [cs.LG] (or arXiv:2602.20070v1 [cs.LG] for this version), https://doi.org/10.48550/arXiv.2602.20070
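The abstract notes that the optimal diffusion coefficient diverges at t = 0 yet poses no difficulty for a suitable integrator. The paper's integrator is not reproduced here; the following toy sketch only illustrates one standard way a 1/t singularity can be tamed: stepping a singular mean-reverting SDE on a log-uniform time grid, so the effective step size dt/t stays constant. The SDE and all constants are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
a, c, T = 2.0, 1.0, 1.0        # hypothetical constants, not from the paper
eps = 1e-6                     # start just above the singular time t = 0
K = 400                        # number of integration steps

# Log-uniform grid: uniform steps in log t keep dt / t constant, so the
# 1/t blow-up in the coefficients never produces a large effective step.
ts = np.geomspace(eps, T, K + 1)

n = 5000
x = rng.normal(scale=np.sqrt(c / a), size=n)   # start near the t->0 fixed point

for t0, t1 in zip(ts[:-1], ts[1:]):
    dt = t1 - t0
    drift = -(a / t0) * x                      # mean-reverting drift ~ 1/t
    diff = np.sqrt(2 * c / t0)                 # diffusion coefficient ~ 1/sqrt(t)
    x = x + drift * dt + diff * np.sqrt(dt) * rng.normal(size=n)

# Despite both coefficients diverging at t = 0, the singular drift and
# diffusion balance each other and the variance stays near c / a.
print(x.var())
```

The same principle, concentrating steps where the coefficients are large, is a common way to integrate SDEs with endpoint singularities, such as Brownian bridges.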

