[2602.21522] One Brain, Omni Modalities: Towards Unified Non-Invasive Brain Decoding with Large Language Models
Summary
The paper introduces NOBEL, a large language model that unifies non-invasive brain decoding by integrating EEG, MEG, and fMRI signals into a shared semantic embedding space, enabling a more holistic interpretation of brain activity.
Why It Matters
This research addresses the fragmentation in brain decoding methodologies by proposing a unified approach that leverages diverse neural signals. It holds potential for advancing neuroscience and improving our understanding of brain functions, which could have implications for clinical applications and artificial intelligence.
Key Takeaways
- NOBEL integrates EEG, MEG, and fMRI signals for improved brain decoding.
- The model achieves higher decoding accuracy than unimodal approaches.
- It effectively links sensory stimuli to neural responses, enhancing understanding of brain activity.
- The research highlights the complementary nature of different neural modalities.
- NOBEL serves as a robust tool for both unimodal and multimodal brain-signal analysis.
Quantitative Biology > Neurons and Cognition, arXiv:2602.21522 (q-bio)
[Submitted on 25 Feb 2026]
Authors: Changli Tang, Shurui Li, Junliang Wang, Qinfan Xiao, Zhonghao Zhai, Lei Bai, Yu Qiao, Bowen Zhou, Wen Wu, Yuanning Li, Chao Zhang
Abstract: Deciphering brain function through non-invasive recordings requires synthesizing complementary high-frequency electromagnetic (EEG/MEG) and low-frequency metabolic (fMRI) signals. However, despite their shared neural origins, extreme discrepancies have traditionally confined these modalities to isolated analysis pipelines, hindering a holistic interpretation of brain activity. To bridge this fragmentation, we introduce NOBEL, a neuro-omni-modal brain-encoding large language model (LLM) that unifies these heterogeneous signals within the LLM's semantic embedding space. Our architecture integrates a unified encoder for EEG and MEG with a novel dual-path strategy for fMRI, aligning non-invasive brain signals and external sensory stimuli into a shared token space, then leverages an LLM as a universal backbone. Extensive evaluations demonstrate that NOBEL serves as a robust generalist acro...
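The architecture the abstract describes, with a shared encoder for the electromagnetic modalities (EEG/MEG), a dual-path fMRI encoder, and projection of all modalities into one token space for an LLM backbone, can be sketched roughly as below. This is a minimal illustration under stated assumptions: the class names, dimensions, linear projections, and the "spatial plus global" reading of "dual-path" are our own placeholders, not the paper's actual implementation.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper).
D_TOKEN = 64              # width of the shared LLM token space
EEG_CH, MEG_CH = 32, 128  # sensor counts per electromagnetic modality
FMRI_VOX = 500            # flattened voxel count per fMRI frame

rng = np.random.default_rng(0)

class LinearProjector:
    """Maps modality-specific features into the shared token space."""
    def __init__(self, d_in, d_out=D_TOKEN):
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    def __call__(self, x):
        return x @ self.W  # (T, d_in) -> (T, d_out)

class SharedEMEncoder:
    """One encoder for EEG and MEG: each modality is first mapped to a
    common channel width, then a single shared projector produces tokens."""
    def __init__(self, d_common=64):
        self.to_common = {"eeg": LinearProjector(EEG_CH, d_common),
                          "meg": LinearProjector(MEG_CH, d_common)}
        self.proj = LinearProjector(d_common)  # shared across EEG and MEG
    def __call__(self, x, modality):
        return self.proj(self.to_common[modality](x))

class DualPathFMRIEncoder:
    """Two paths into the same token space: per-frame spatial tokens plus
    one global summary token (a hypothetical reading of 'dual-path')."""
    def __init__(self):
        self.spatial = LinearProjector(FMRI_VOX)
        self.global_ = LinearProjector(FMRI_VOX)
    def __call__(self, x):
        spatial_tokens = self.spatial(x)                       # (T, D_TOKEN)
        global_token = self.global_(x.mean(0, keepdims=True))  # (1, D_TOKEN)
        return np.concatenate([global_token, spatial_tokens])

em = SharedEMEncoder()
fmri = DualPathFMRIEncoder()
eeg_tokens = em(rng.standard_normal((10, EEG_CH)), "eeg")
meg_tokens = em(rng.standard_normal((10, MEG_CH)), "meg")
fmri_tokens = fmri(rng.standard_normal((5, FMRI_VOX)))

# All modalities now share one token width, so they can be concatenated
# into a single sequence and fed to an LLM backbone.
sequence = np.concatenate([eeg_tokens, meg_tokens, fmri_tokens])
print(sequence.shape)  # (26, 64)
```

The key point the sketch captures is that heterogeneous signals only need modality-specific front-ends; once everything lands in one token width, the downstream LLM can treat the concatenated sequence uniformly.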