[2602.18253] MEG-to-MEG Transfer Learning and Cross-Task Speech/Silence Detection with Limited Data
Summary
This paper demonstrates MEG-to-MEG transfer learning for speech/silence detection, showing that pre-training on abundant listening data improves decoding when only minutes of per-subject data are available.
Why It Matters
The research addresses the challenge of data efficiency in speech brain-computer interfaces, showcasing how transfer learning can enhance model performance in both speech perception and production tasks. This has implications for improving assistive technologies and understanding neural processes involved in speech.
Key Takeaways
- Transfer learning improves in-task accuracy by 1-4% and cross-task accuracy by up to 5-6%.
- Pre-training on 50 hours of listening data boosts performance when fine-tuning on just 5 minutes of production data per subject.
- Models trained on production tasks can decode listening data, indicating shared neural representations.
Computer Science > Machine Learning
arXiv:2602.18253 (cs) [Submitted on 20 Feb 2026]
Title: MEG-to-MEG Transfer Learning and Cross-Task Speech/Silence Detection with Limited Data
Authors: Xabier de Zuazo, Vincenzo Verbeni, Eva Navas, Ibon Saratxaga, Mathieu Bourguignon, Nicola Molinaro
Abstract: Data-efficient neural decoding is a central challenge for speech brain-computer interfaces. We present the first demonstration of transfer learning and cross-task decoding for MEG-based speech models spanning perception and production. We pre-train a Conformer-based model on 50 hours of single-subject listening data and fine-tune on just 5 minutes per subject across 18 participants. Transfer learning yields consistent improvements, with in-task accuracy gains of 1-4% and larger cross-task gains of up to 5-6%. Not only does pre-training improve performance within each task, but it also enables reliable cross-task decoding between perception and production. Critically, models trained on speech production decode passive listening above chance, confirming that learned representations reflect shared neural processes rather than task-specific motor activity.
Subjects: Machine Learning (cs.LG)
MSC classes: 68T07 (Primary), 62H30 (Secondary)
ACM classes: I.2.6; I.5.4
Cite as: arXiv:2602.18253 [cs.LG]
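The pre-train/fine-tune recipe described in the abstract can be sketched in PyTorch. This is a minimal illustration, not the authors' code: the toy convolutional encoder stands in for their Conformer, all dimensions and names (`MEGDetector`, `n_channels`, the random tensors) are assumptions, and freezing the encoder is just one common fine-tuning variant (the paper does not specify which layers are updated).

```python
# Hypothetical sketch of transfer learning for MEG speech/silence detection.
# A toy encoder stands in for the paper's Conformer; data is random.
import torch
import torch.nn as nn

class MEGDetector(nn.Module):
    """Encoder + binary speech/silence classification head."""
    def __init__(self, n_channels=32, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hidden, 2)  # logits for speech vs. silence

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.encoder(x).squeeze(-1)    # (batch, hidden)
        return self.head(z)

# 1) "Pre-train" on abundant listening data (one step shown for brevity).
model = MEGDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_listen = torch.randn(8, 32, 100)         # fake MEG: 8 trials, 32 ch, 100 samples
y_listen = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x_listen), y_listen)
opt.zero_grad(); loss.backward(); opt.step()

# 2) Transfer: freeze the encoder and fine-tune only the head on the
#    scarce production data (minutes per subject in the paper).
for p in model.encoder.parameters():
    p.requires_grad = False
ft_opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
x_prod = torch.randn(4, 32, 100)
y_prod = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x_prod), y_prod)
ft_opt.zero_grad(); loss.backward(); ft_opt.step()
print(model(x_prod).shape)                 # one 2-class logit pair per trial
```

The same warm-started model can then be evaluated on the other task (e.g. a production-trained model decoding listening trials), which is the cross-task setting the paper reports.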