[2601.17641] RPNT: Robust Pre-trained Neural Transformer -- A Pathway for Generalized Motor Decoding
Computer Science > Machine Learning
arXiv:2601.17641 (cs)
[Submitted on 25 Jan 2026 (v1), last revised 1 Apr 2026 (this version, v2)]

Title: RPNT: Robust Pre-trained Neural Transformer -- A Pathway for Generalized Motor Decoding
Authors: Hao Fang, Ryan A. Canfield, Tomohiro Ouchi, Beatrice Macagno, Eli Shlizerman, Amy L. Orsborn

Abstract: Brain motor decoding aims to interpret and translate neural activity into behaviors. The ability of decoding models to generalize across variations, such as recordings from different brain sites, experimental sessions, behavior types, and subjects, will be critical for real-world applications. Current decoding models only partially address these challenges. In this work, we develop a pretrained neural transformer model, RPNT (Robust Pretrained Neural Transformer), designed to achieve robust generalization through pretraining, which in turn enables effective finetuning for downstream motor decoding tasks. We arrived at the proposed RPNT architecture by systematically investigating which transformer building blocks are suitable for modeling neural spike activity, since components from models developed for other modalities, such as text and images, do not transfer directly to neural data. The final RPNT architecture incorporates three unique enabling components: 1) Multidime...
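
To make the pretrain-then-finetune paradigm described in the abstract concrete, below is a minimal, hypothetical sketch: a transformer encoder over binned spike counts is pretrained with a self-supervised masked-reconstruction objective on unlabeled recordings, then finetuned with a small supervised head to decode behavior. Every name, dimension, and the choice of masking objective here is an illustrative assumption, not the paper's actual RPNT architecture or its three enabling components (which are truncated in the abstract above).

import torch
import torch.nn as nn

class SpikeTransformer(nn.Module):
    """Toy transformer over binned spike counts (assumed input format)."""
    def __init__(self, n_channels: int, d_model: int = 128, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)       # one token per time bin
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.recon_head = nn.Linear(d_model, n_channels)  # pretraining: reconstruct masked bins
        self.decode_head = nn.Linear(d_model, 2)          # finetuning: e.g. 2-D cursor velocity

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        return self.encoder(self.embed(spikes))           # (batch, time, d_model)

def pretrain_step(model, spikes, optimizer, mask_frac=0.3):
    """One masked-reconstruction step on unlabeled spike data (assumed objective)."""
    mask = torch.rand(spikes.shape[:2], device=spikes.device) < mask_frac
    corrupted = spikes.masked_fill(mask.unsqueeze(-1), 0.0)
    recon = model.recon_head(model(corrupted))
    loss = ((recon - spikes) ** 2)[mask].mean()           # penalize only the masked bins
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def finetune_step(model, spikes, velocity, optimizer):
    """One supervised decoding step on labeled (spikes, behavior) pairs."""
    pred = model.decode_head(model(spikes))
    loss = nn.functional.mse_loss(pred, velocity)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

model = SpikeTransformer(n_channels=96)                   # e.g. a 96-channel array
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
spikes = torch.randn(8, 50, 96).relu()                    # (batch, 50 bins, channels), toy data
print(pretrain_step(model, spikes, opt))                  # self-supervised phase
print(finetune_step(model, spikes, torch.randn(8, 50, 2), opt))  # supervised phase

The key design point the abstract argues for is that the encoder weights shared between pretrain_step and finetune_step are what carry generalization across sessions, sites, behaviors, and subjects; only the lightweight decode_head needs task-specific supervision.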