[2602.18253] MEG-to-MEG Transfer Learning and Cross-Task Speech/Silence Detection with Limited Data

arXiv - Machine Learning · 3 min read

Summary

This paper presents the first demonstration of MEG-to-MEG transfer learning and cross-task decoding for speech/silence detection, showing that a Conformer model pre-trained on abundant listening data can be fine-tuned effectively with only minutes of production data per subject.

Why It Matters

The research addresses the challenge of data efficiency in speech brain-computer interfaces, showcasing how transfer learning can enhance model performance in both speech perception and production tasks. This has implications for improving assistive technologies and understanding neural processes involved in speech.

Key Takeaways

  • Transfer learning improves in-task accuracy by 1-4% and cross-task accuracy by up to 5-6%.
  • Pre-training on 50 hours of listening data enhances performance when fine-tuning on just 5 minutes of production data per subject.
  • Models trained on speech production decode passive listening above chance, indicating shared neural representations rather than task-specific motor activity.

Computer Science > Machine Learning

arXiv:2602.18253 (cs) [Submitted on 20 Feb 2026]

Title: MEG-to-MEG Transfer Learning and Cross-Task Speech/Silence Detection with Limited Data
Authors: Xabier de Zuazo, Vincenzo Verbeni, Eva Navas, Ibon Saratxaga, Mathieu Bourguignon, Nicola Molinaro

Abstract: Data-efficient neural decoding is a central challenge for speech brain-computer interfaces. We present the first demonstration of transfer learning and cross-task decoding for MEG-based speech models spanning perception and production. We pre-train a Conformer-based model on 50 hours of single-subject listening data and fine-tune on just 5 minutes per subject across 18 participants. Transfer learning yields consistent improvements, with in-task accuracy gains of 1-4% and larger cross-task gains of up to 5-6%. Not only does pre-training improve performance within each task, but it also enables reliable cross-task decoding between perception and production. Critically, models trained on speech production decode passive listening above chance, confirming that learned representations reflect shared neural processes rather than task-specific motor activity.

Subjects: Machine Learning (cs.LG)
MSC classes: 68T07 (Primary), 62H30 (Secondary)
ACM classes: I.2.6; I.5.4
Cite as: arXiv:2602.18253 [cs.LG]
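The digest does not include the paper's code, but the pre-train/fine-tune recipe the abstract describes is a standard one. Below is a minimal, hypothetical PyTorch sketch of that workflow, using torchaudio's generic Conformer encoder as a stand-in; the channel count, classification head, and learning rates are illustrative assumptions, not the authors' actual architecture or hyperparameters.

```python
# Hypothetical sketch of the pre-train/fine-tune recipe from the abstract.
# Only the overall workflow (pre-train on abundant listening MEG, fine-tune
# on a few minutes of production MEG) comes from the paper's summary; all
# specific numbers and layer choices below are assumptions.
import torch
import torch.nn as nn
from torchaudio.models import Conformer

N_CHANNELS = 204  # assumed number of MEG sensor channels

class SpeechSilenceDetector(nn.Module):
    """Conformer encoder + frame-wise speech/silence classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = Conformer(
            input_dim=N_CHANNELS,
            num_heads=4,
            ffn_dim=256,
            num_layers=6,
            depthwise_conv_kernel_size=31,
        )
        self.head = nn.Linear(N_CHANNELS, 2)  # speech vs. silence logits

    def forward(self, x, lengths):
        # x: (batch, time, channels) windows of MEG data
        h, _ = self.encoder(x, lengths)
        return self.head(h)

model = SpeechSilenceDetector()
loss_fn = nn.CrossEntropyLoss()

# Stage 1: pre-train on ~50 h of single-subject listening MEG.
pretrain_opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Stage 2: fine-tune the same weights on ~5 min of production MEG per
# subject; a smaller learning rate helps avoid overfitting the tiny set.
finetune_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Cross-task evaluation: a model fine-tuned on production data is simply
# run on held-out listening data (and vice versa) with no further training.
```

In this framing, cross-task decoding is just evaluation under a task shift: the paper's finding that production-trained models decode passive listening above chance falls out of running the fine-tuned model on the other task's data, with no extra training step.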
