[2604.08574] Distilling Genomic Models for Efficient mRNA Representation Learning via Embedding Matching
Computer Science > Machine Learning
arXiv:2604.08574 (cs)
[Submitted on 27 Mar 2026]

Title: Distilling Genomic Models for Efficient mRNA Representation Learning via Embedding Matching
Authors: Rasched Haidari, Sam Martin, Maxime Allard

Abstract: Large genomic foundation models have recently achieved remarkable results, including in-vivo translation capabilities. However, these models quickly grow to billions of parameters and are expensive to run when compute is limited. To overcome this challenge, we present a distillation framework for transferring mRNA representations from a state-of-the-art genomic foundation model into a much smaller model specialized for mRNA sequences, reducing model size by 200-fold. Embedding-level distillation proved more effective than logit-based methods, which we found unstable during training. Benchmarking on mRNA-bench demonstrates that the distilled model achieves state-of-the-art performance among models of comparable size and competes with larger architectures on mRNA-related tasks. Our results highlight embedding-based distillation as an effective training strategy for biological foundation models, enabling efficient and scalable sequence modelling in genomics, particularly when large models are computationally challenging or infeasible to run.
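The abstract does not spell out the training objective, but embedding-level distillation of the kind described typically reduces to regressing the student's per-token embeddings onto those of a frozen teacher. The sketch below illustrates one such setup in PyTorch; the EmbeddingDistiller class, the linear projection, the MSE loss, and the stand-in encoders are all assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class EmbeddingDistiller(nn.Module):
        """Hypothetical embedding-matching wrapper: trains a small student
        encoder to reproduce the embeddings of a frozen teacher."""

        def __init__(self, student, teacher, student_dim, teacher_dim):
            super().__init__()
            self.student = student
            self.teacher = teacher.eval()
            for p in self.teacher.parameters():      # freeze the teacher
                p.requires_grad_(False)
            # Linear projection aligns the student's (smaller) embedding
            # dimension with the teacher's embedding space.
            self.proj = nn.Linear(student_dim, teacher_dim)

        def forward(self, tokens):
            with torch.no_grad():
                t_emb = self.teacher(tokens)         # (B, L, teacher_dim)
            s_emb = self.proj(self.student(tokens))  # (B, L, teacher_dim)
            # Embedding-matching loss: mean squared error between the
            # projected student embeddings and the teacher embeddings.
            return nn.functional.mse_loss(s_emb, t_emb)

    # Stand-in encoders (purely illustrative; real models would be a large
    # genomic foundation model and a compact mRNA-specific encoder).
    teacher = nn.Embedding(5, 64)   # vocab of 5 tokens (A/C/G/U/pad) -> 64-d
    student = nn.Embedding(5, 16)
    distiller = EmbeddingDistiller(student, teacher, student_dim=16, teacher_dim=64)

    tokens = torch.randint(0, 5, (2, 128))  # batch of 2 sequences, length 128
    loss = distiller(tokens)
    loss.backward()

One appeal of an objective in embedding space is that it sidesteps the vocabulary-alignment and temperature-tuning issues of logit-based distillation, which is consistent with the abstract's observation that logit-based methods were unstable.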