[2602.18478] ZUNA: Flexible EEG Superresolution with Position-Aware Diffusion Autoencoders

arXiv - Machine Learning

Summary

The paper presents ZUNA, a 380M-parameter masked diffusion autoencoder for EEG superresolution and channel infilling, demonstrating substantial gains over spherical-spline interpolation, particularly at high channel-dropout rates.

Why It Matters

ZUNA addresses a central challenge in EEG signal processing: reconstructing high-quality signals from incomplete or sparse electrode montages. Because the model generalizes across datasets and channel layouts, it can be applied directly to new recordings, broadening the utility of EEG in both clinical and research settings.

Key Takeaways

  • ZUNA uses a 4D rotary positional encoding over electrode coordinates and time, (x, y, z, t), so it can handle arbitrary channel subsets and positions.
  • The model is trained on an aggregated corpus of 208 public datasets (roughly 2 million channel-hours), improving generalization across EEG datasets.
  • ZUNA outperforms traditional spherical-spline interpolation, with the gap widening at higher channel-dropout rates.
  • The architecture is designed for practical deployment in real-world EEG analysis.
  • Apache-2.0 weights and preprocessing tools are released to promote reproducibility.
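The first takeaway, a rotary positional encoding extended from one coordinate (time) to four (x, y, z, t), can be illustrated with a minimal numpy sketch. This is an assumption about the general technique, not the paper's implementation: it splits a token's embedding into four equal groups and applies a standard 1D rotary rotation to each group, driven by one coordinate. All function names (`rope_1d`, `rope_4d`) and the group-splitting scheme are hypothetical.

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Rotate dimension pairs of x (last axis, even-sized) by angles
    pos * base**(-i/half) -- standard 1D rotary encoding."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def rope_4d(token, xyzt, base=10000.0):
    """Hypothetical 4D variant: split the embedding into four equal
    groups and rotate each group by one of the (x, y, z, t) coords."""
    assert token.shape[-1] % 8 == 0, "need 4 groups of even size"
    groups = np.split(token, 4, axis=-1)
    out = [rope_1d(g, c, base) for g, c in zip(groups, xyzt)]
    return np.concatenate(out, axis=-1)

# one token from an electrode at scalp position (x, y, z), window index t
tok = np.random.default_rng(0).standard_normal(64)
enc = rope_4d(tok, (0.1, -0.3, 0.8, 5))
print(enc.shape)  # rotation preserves the embedding's L2 norm
```

Because each group is only rotated, the encoding preserves vector norms while making attention scores depend on relative electrode position and time offset, which is what lets inference run on arbitrary channel subsets.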

Electrical Engineering and Systems Science > Signal Processing
arXiv:2602.18478 (eess) [Submitted on 9 Feb 2026]

Title: ZUNA: Flexible EEG Superresolution with Position-Aware Diffusion Autoencoders
Authors: Christopher Warner, Jonas Mago, JR Huml, Mohamed Osman, Beren Millidge

Abstract: We present ZUNA, a 380M-parameter masked diffusion autoencoder trained to perform masked channel infilling and superresolution for arbitrary electrode numbers and positions in EEG signals. The ZUNA architecture tokenizes multichannel EEG into short temporal windows and injects spatiotemporal structure via a 4D rotary positional encoding over (x, y, z, t), enabling inference on arbitrary channel subsets and positions. We train ZUNA on an aggregated and harmonized corpus spanning 208 public datasets containing approximately 2 million channel-hours, using a combined reconstruction and heavy channel-dropout objective. We show that ZUNA substantially improves over ubiquitous spherical-spline interpolation methods, with the gap widening at higher dropout rates. Crucially, compared to other deep learning methods in this space, ZUNA's performance generalizes across datasets and channel positions, allowing it to be applied directly to novel datasets and problems. Despite its generative c...
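The "heavy channel-dropout objective" in the abstract can be sketched as a masked-reconstruction setup: drop a large random fraction of channels, feed the model only the surviving ones, and score reconstruction on the dropped channels. The sketch below is an assumption about this training pattern, not code from the paper; the function name `channel_dropout_batch`, the 70% drop fraction, and the zero-fill convention are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_dropout_batch(eeg, drop_frac=0.7, rng=rng):
    """Mask a random fraction of EEG channels.
    eeg: (channels, samples). Returns the visible signal (dropped
    channels zeroed) and a boolean keep-mask; a model would be trained
    to reconstruct eeg on the dropped channels (where ~mask)."""
    n_ch = eeg.shape[0]
    n_drop = int(round(drop_frac * n_ch))
    dropped = rng.choice(n_ch, size=n_drop, replace=False)
    mask = np.ones(n_ch, dtype=bool)
    mask[dropped] = False
    visible = eeg * mask[:, None]  # zero out dropped channels
    return visible, mask

eeg = rng.standard_normal((64, 256))  # 64 channels, 256 samples
visible, mask = channel_dropout_batch(eeg)
print(int(mask.sum()), "of 64 channels kept")
```

Evaluating reconstruction only on held-out channels is also how such a model would be compared against spherical-spline interpolation at matched dropout rates.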

