[2602.09437] Diffusion-Guided Pretraining for Brain Graph Foundation Models
Summary
The paper presents a diffusion-guided pretraining framework for brain graph models, addressing limitations in existing methods for learning from connectome data.
Why It Matters
This research is significant because it improves representation learning for brain connectivity data, which could yield better insights in neuroimaging and mental health studies. By preserving semantically meaningful connectivity patterns during pretraining, the proposed framework makes a valuable contribution to machine learning in neuroscience.
Key Takeaways
- Proposes a novel diffusion-based pretraining framework for brain graphs.
- Addresses limitations of existing augmentation methods that disrupt semantic connectivity.
- Demonstrates improved performance across multiple neuroimaging datasets.
- Enables topology-aware graph-level readout and node-level reconstruction.
- Utilizes extensive data from over 25,000 subjects to validate findings.
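The core idea behind structure-aware augmentation can be illustrated with a small sketch: instead of dropping edges uniformly at random, score each edge by how much it contributes to diffusion (information flow) over the graph, and preferentially drop low-importance edges so that semantically meaningful connectivity survives. This is a minimal illustration of the general technique, not the paper's actual method; the diffusion score here is a simple truncated random-walk accumulation, and all function names are hypothetical.

```python
import numpy as np

def diffusion_scores(A, steps=3, alpha=0.15):
    """Accumulate truncated random-walk diffusion over adjacency A.

    Returns an n x n matrix S where S[i, j] reflects how strongly
    node j is reachable from node i within `steps` damped steps.
    (Illustrative stand-in for a learned or heat-kernel diffusion.)
    """
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0          # avoid division by zero for isolated nodes
    T = A / deg                  # row-normalized transition matrix
    S = np.eye(n)
    P = np.eye(n)
    for _ in range(steps):
        P = (1 - alpha) * (P @ T)  # damped walk step
        S = S + P                  # accumulate multi-hop reachability
    return S

def structure_aware_edge_drop(A, drop_ratio=0.2, rng=None):
    """Drop a fraction of edges, preferring edges with LOW diffusion importance.

    High-importance edges (those carrying much diffusion mass) are kept,
    preserving the graph's semantic connectivity while still perturbing it.
    """
    rng = np.random.default_rng(rng)
    S = diffusion_scores(A)
    iu, ju = np.nonzero(np.triu(A, k=1))        # undirected edge list
    importance = S[iu, ju] + S[ju, iu]          # symmetric edge importance
    probs = 1.0 / (importance + 1e-8)           # inverse-importance sampling
    probs = probs / probs.sum()
    n_drop = int(drop_ratio * len(iu))
    drop = rng.choice(len(iu), size=n_drop, replace=False, p=probs)
    A_aug = A.copy()
    A_aug[iu[drop], ju[drop]] = 0               # remove both directions
    A_aug[ju[drop], iu[drop]] = 0
    return A_aug
```

In a contrastive or masked-autoencoder pretraining loop, `structure_aware_edge_drop` would replace the naive random edge dropping that the paper criticizes; the same inverse-importance idea extends to node masking by scoring nodes with row sums of the diffusion matrix.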
Computer Science > Machine Learning
arXiv:2602.09437 (cs)
This paper has been withdrawn by Xinxu Wei.
[Submitted on 10 Feb 2026 (v1), last revised 19 Feb 2026 (this version, v2)]
Authors: Xinxu Wei, Rong Zhou, Lifang He, Yu Zhang
Abstract: With the growing interest in foundation models for brain signals, graph-based pretraining has emerged as a promising paradigm for learning transferable representations from connectome data. However, existing contrastive and masked autoencoder methods typically rely on naive random dropping or masking for augmentation, which is ill-suited for brain graphs and hypergraphs as it disrupts semantically meaningful connectivity patterns. Moreover, commonly used graph-level readout and reconstruction schemes fail to capture global structural information, limiting the robustness of learned representations. In this work, we propose a unified diffusion-based pretraining framework that addresses both limitations. First, diffusion is designed to guide structure-aware dropping and masking strategies, preserving brain graph semantics while maintaining effective pretraining diversity. Second, diffusion enables topology-aware graph-level readout and node-level global reconstruction by allowing graph embeddings and mask...