[2602.21522] One Brain, Omni Modalities: Towards Unified Non-Invasive Brain Decoding with Large Language Models

arXiv - AI · 4 min read · Article

Summary

The paper introduces NOBEL, a large language model that unifies non-invasive brain decoding by integrating EEG, MEG, and fMRI signals into a shared embedding space, enabling a more holistic interpretation of brain activity.

Why It Matters

This research addresses the fragmentation in brain decoding methodologies by proposing a unified approach that leverages diverse neural signals. It holds potential for advancing neuroscience and improving our understanding of brain functions, which could have implications for clinical applications and artificial intelligence.

Key Takeaways

  • NOBEL integrates EEG, MEG, and fMRI signals for improved brain decoding.
  • The model demonstrates higher accuracy in decoding compared to unimodal approaches.
  • It effectively links sensory stimuli to neural responses, enhancing understanding of brain activity.
  • The research highlights the complementary nature of different neural modalities.
  • NOBEL serves as a robust tool for both single-modal and multi-modal brain analysis.

Quantitative Biology > Neurons and Cognition · arXiv:2602.21522 (q-bio) · Submitted on 25 Feb 2026

Title: One Brain, Omni Modalities: Towards Unified Non-Invasive Brain Decoding with Large Language Models

Authors: Changli Tang, Shurui Li, Junliang Wang, Qinfan Xiao, Zhonghao Zhai, Lei Bai, Yu Qiao, Bowen Zhou, Wen Wu, Yuanning Li, Chao Zhang

Abstract: Deciphering brain function through non-invasive recordings requires synthesizing complementary high-frequency electromagnetic (EEG/MEG) and low-frequency metabolic (fMRI) signals. However, despite their shared neural origins, extreme discrepancies have traditionally confined these modalities to isolated analysis pipelines, hindering a holistic interpretation of brain activity. To bridge this fragmentation, we introduce NOBEL, a neuro-omni-modal brain-encoding large language model (LLM) that unifies these heterogeneous signals within the LLM's semantic embedding space. Our architecture integrates a unified encoder for EEG and MEG with a novel dual-path strategy for fMRI, aligning non-invasive brain signals and external sensory stimuli into a shared token space, then leverages an LLM as a universal backbone. Extensive evaluations demonstrate that NOBEL serves as a robust generalist acro...
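The abstract's fusion idea can be sketched roughly: a single encoder maps both electromagnetic modalities (EEG/MEG) into the LLM's token space, a separate dual-path module handles fMRI, and the resulting token sequences are concatenated for a shared backbone. The sketch below is a minimal, hypothetical illustration with numpy; the function names, dimensions, and the tanh projections are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # assumed shared LLM embedding dimension

def shared_eeg_meg_encoder(x, W):
    # One encoder is reused for both electromagnetic modalities (EEG and MEG),
    # projecting each signal token into the shared D-dimensional space.
    return np.tanh(x @ W)

def fmri_dual_path(x, W_a, W_b):
    # Hypothetical "dual-path" stand-in: two parallel projections of the
    # fMRI signal, fused by summation into the same token space.
    return np.tanh(x @ W_a) + np.tanh(x @ W_b)

# Fake recordings, shaped (num_tokens, num_channels); channel counts are made up.
eeg = rng.standard_normal((10, 32))
meg = rng.standard_normal((10, 32))
fmri = rng.standard_normal((4, 128))

W_em = rng.standard_normal((32, D)) * 0.1   # shared EEG/MEG projection
W_a = rng.standard_normal((128, D)) * 0.1   # fMRI path A
W_b = rng.standard_normal((128, D)) * 0.1   # fMRI path B

# All modalities now live in one token space; a downstream LLM backbone
# would consume this concatenated sequence.
tokens = np.concatenate([
    shared_eeg_meg_encoder(eeg, W_em),
    shared_eeg_meg_encoder(meg, W_em),
    fmri_dual_path(fmri, W_a, W_b),
], axis=0)
print(tokens.shape)  # (24, 64): 10 EEG + 10 MEG + 4 fMRI tokens, each D-dim
```

The key design point the abstract emphasizes is that once heterogeneous signals share one token space, a single backbone can treat them uniformly, which is what the concatenation step stands in for here.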
