[2602.13325] Graph neural networks uncover structure and functions underlying the activity of simulated neural assemblies
Summary
This article discusses how graph neural networks trained to predict observable dynamics can decompose the activity of simulated neural assemblies into simple, interpretable representations, revealing the underlying structures and functions that generate that activity.
Why It Matters
Understanding the mechanisms of neural assemblies is crucial for advances in both neuroscience and artificial intelligence. This study offers an approach that delivers the interpretability often lacking in traditional machine learning models. By recovering neural connectivity and signaling functions directly from activity data, it bridges the gap between complex neural dynamics and machine learning applications.
Key Takeaways
- Graph neural networks can decompose neural activity into interpretable representations.
- The method reveals neural connectivity, neuron types, and signaling functions.
- This approach offers better interpretability compared to traditional models like RNNs and transformers.
- The findings can enhance our understanding of neural dynamics in both biological and artificial systems.
- The study demonstrates the potential for improved predictive accuracy alongside interpretability.
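The core idea above can be illustrated with a toy message-passing step. The following is a minimal sketch, not the authors' code: it assumes a sparse connectivity matrix and a tanh signaling function (both illustrative choices), and shows how each neuron's next activity can be predicted by aggregating transformed signals from its presynaptic neighbors — the kind of graph-structured update a GNN would learn.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                         # number of neurons (toy size)
A = (rng.random((n, n)) < 0.1).astype(float)   # assumed sparse connectivity matrix
np.fill_diagonal(A, 0.0)                       # no self-connections

def gnn_step(x, A, w_self=0.9, w_msg=0.1):
    """One message-passing update: each neuron sums its neighbors'
    activity passed through a nonlinearity (the 'signaling function'),
    then combines that message with its own decayed state."""
    messages = A @ np.tanh(x)                  # aggregate transformed neighbor activity
    return w_self * x + w_msg * messages

# Roll out a short activity trajectory from a random initial state.
x = rng.standard_normal(n)
trajectory = [x]
for _ in range(100):
    x = gnn_step(x, A)
    trajectory.append(x)
trajectory = np.array(trajectory)              # shape (101, n)
```

In the paper's framework, the analogue of `gnn_step` is learned from observed activity alone; inspecting the learned edge and node functions is what exposes the connectivity matrix, neuron types, and signaling functions.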
Paper Details
Quantitative Biology > Neurons and Cognition (q-bio)
arXiv:2602.13325 [q-bio.NC], submitted 11 Feb 2026
Authors: Cédric Allier, Larissa Heinrich, Magdalena Schneider, Stephan Saalfeld
Abstract: Graph neural networks trained to predict observable dynamics can be used to decompose the temporal activity of complex heterogeneous systems into simple, interpretable representations. Here we apply this framework to simulated neural assemblies with thousands of neurons and demonstrate that it can jointly reveal the connectivity matrix, the neuron types, the signaling functions, and in some cases hidden external stimuli. In contrast to existing machine learning approaches such as recurrent neural networks and transformers, which emphasize predictive accuracy but offer limited interpretability, our method provides both reliable forecasts of neural activity and interpretable decomposition of the mechanisms governing large neural assemblies.
Subjects: Neurons and Cognition (q-bio.NC); Machine Learning (cs.LG)
DOI: https://doi.org/10.48550/arXiv.2602.13325