[2602.17706] Parallel Complex Diffusion for Scalable Time Series Generation


Summary

The paper presents PaCoDi, a parallel complex diffusion model for time series generation that improves both the efficiency and the quality of modeling long-range dependencies.

Why It Matters

This research addresses critical limitations in traditional time series generation models, particularly regarding computational efficiency and representational capacity. By introducing a new architecture that operates in the frequency domain, it provides a scalable solution that could significantly impact various applications in machine learning and data analysis.

Key Takeaways

  • PaCoDi decouples generative modeling in the frequency domain, enhancing computational efficiency.
  • The model achieves a 50% reduction in attention FLOPs without loss of information.
  • Extensive experiments demonstrate superior generation quality and inference speed compared to existing models.
  • Theoretical foundations include Quadrature Forward Diffusion and Conditional Reverse Factorization theorems.
  • The approach effectively handles non-isotropic noise distributions through a novel Heteroscedastic Loss.
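The spectral decoupling behind these takeaways can be illustrated with a minimal NumPy sketch (not the paper's implementation): the FFT converts a temporal signal into spectral components whose real and imaginary parts form two branches that could be processed in parallel and recombined without loss.

```python
import numpy as np

# Toy temporal signal with long-range (periodic) structure.
rng = np.random.default_rng(0)
L = 256
t = np.arange(L)
x = np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(L)

# FFT as a "diagonalizing operator": locally coupled samples
# become globally decorrelated spectral components.
X = np.fft.rfft(x)

# Split into real and imaginary branches, loosely mirroring the
# paper's quadrature decomposition (illustrative only).
real_branch = X.real.copy()
imag_branch = X.imag.copy()

# ... each branch could be diffused / denoised independently here ...

# Recombine the branches and invert: reconstruction is lossless.
x_rec = np.fft.irfft(real_branch + 1j * imag_branch, n=L)
print(np.allclose(x, x_rec))  # True
```

The round trip shows why operating on the two branches separately forfeits no information, which is consistent with the "50% reduction in attention FLOPs without loss of information" claim above.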

Computer Science > Machine Learning
arXiv:2602.17706 (cs)
[Submitted on 10 Feb 2026]

Title: Parallel Complex Diffusion for Scalable Time Series Generation
Authors: Rongyao Cai, Yuxi Wan, Kexin Zhang, Ming Jin, Zhiqiang Ge, Qingsong Wen, Yong Liu

Abstract: Modeling long-range dependencies in time series generation poses a fundamental trade-off between representational capacity and computational efficiency. Traditional temporal diffusion models suffer from local entanglement and the O(L^2) cost of attention mechanisms. We address these limitations by introducing PaCoDi (Parallel Complex Diffusion), a spectral-native architecture that decouples generative modeling in the frequency domain. PaCoDi fundamentally alters the problem topology: the Fourier Transform acts as a diagonalizing operator, converting locally coupled temporal signals into globally decorrelated spectral components. Theoretically, we prove the Quadrature Forward Diffusion and Conditional Reverse Factorization theorems, demonstrating that the complex diffusion process can be split into independent real and imaginary branches. We bridge the gap between this decoupled theory and data reality using a Mean Field Theory (MFT) approximation reinforced by an interactive correction mechanism. Furthermore, we generalize this discrete DDPM to cont...
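The "Heteroscedastic Loss" named in the takeaways is not spelled out in the truncated abstract; a standard heteroscedastic Gaussian NLL gives a plausible sketch of how non-isotropic, per-frequency noise could be weighted. The function below is an assumption for illustration, not the paper's exact loss.

```python
import numpy as np

def heteroscedastic_loss(pred, target, log_var):
    """Gaussian negative log-likelihood with per-component variance.

    Standard heteroscedastic form: residuals in high-variance
    components are down-weighted, and the 0.5 * log_var term
    penalizes inflating the variance to ignore errors.
    (Assumed sketch; the paper's exact loss may differ.)
    """
    var = np.exp(log_var)
    return np.mean(0.5 * (pred - target) ** 2 / var + 0.5 * log_var)

# Example: spectral bins with larger predicted log-variance
# contribute less per unit of squared error.
rng = np.random.default_rng(1)
pred = rng.standard_normal(128)
target = pred + 0.1 * rng.standard_normal(128)
log_var = np.linspace(1.0, -1.0, 128)  # hypothetical per-bin log-variance
print(heteroscedastic_loss(pred, target, log_var))
```

With isotropic (constant) variance this reduces to a rescaled MSE, which is why a heteroscedastic form is needed once the spectral noise distribution is non-isotropic.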
