[2602.22236] CrossLLM-Mamba: Multimodal State Space Fusion of LLMs for RNA Interaction Prediction


arXiv - Machine Learning

Summary

The article presents CrossLLM-Mamba, a framework for RNA interaction prediction that fuses embeddings from biological large language models via state space models, achieving state-of-the-art performance across three RNA interaction categories.

Why It Matters

This research addresses a limitation of existing RNA interaction prediction methods, which fuse sequence embeddings statically, by introducing a dynamic, state-space modeling approach. The advance could deepen our understanding of cellular regulation and support drug discovery, making it relevant to both academic research and pharmaceutical applications.

Key Takeaways

  • CrossLLM-Mamba reformulates RNA interaction prediction as a state-space alignment problem.
  • The framework allows for dynamic modeling of interactions through hidden state propagation.
  • Achieves state-of-the-art performance on the RPI1460 benchmark with an MCC of 0.892.
  • Incorporates Gaussian noise injection and Focal Loss for improved robustness.
  • Demonstrates the potential of state-space modeling in multimodal biological predictions.
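The headline MCC of 0.892 refers to the Matthews correlation coefficient, a balanced binary-classification metric that accounts for all four confusion-matrix cells. A minimal sketch of how it is computed (not the authors' code; the example counts are hypothetical):

```python
def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
    """
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

# Hypothetical counts on a balanced benchmark split
print(round(mcc(tp=90, tn=88, fp=6, fn=8), 3))  # → 0.854
```

Unlike accuracy, MCC stays informative under class imbalance, which is why it is a common headline metric for interaction benchmarks such as RPI1460.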

Quantitative Biology > Genomics · arXiv:2602.22236 (q-bio) · Submitted on 23 Feb 2026

Title: CrossLLM-Mamba: Multimodal State Space Fusion of LLMs for RNA Interaction Prediction

Authors: Rabeya Tus Sadia, Qiang Ye, Qiang Cheng

Abstract: Accurate prediction of RNA-associated interactions is essential for understanding cellular regulation and advancing drug discovery. While Biological Large Language Models (BioLLMs) such as ESM-2 and RiNALMo provide powerful sequence representations, existing methods rely on static fusion strategies that fail to capture the dynamic, context-dependent nature of molecular binding. We introduce CrossLLM-Mamba, a novel framework that reformulates interaction prediction as a state-space alignment problem. By leveraging bidirectional Mamba encoders, our approach enables deep "crosstalk" between modality-specific embeddings through hidden state propagation, modeling interactions as dynamic sequence transitions rather than static feature overlaps. The framework maintains linear computational complexity, making it scalable to high-dimensional BioLLM embeddings. We further incorporate Gaussian noise injection and Focal Loss to enhance robustness against hard-negative samples. Comprehensive experiments across three interaction categories, RNA-protein, RNA-small molecule, a...
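The two robustness techniques named in the abstract, Gaussian noise injection on embeddings and Focal Loss on hard negatives, are both standard and easy to illustrate. Below is a hedged NumPy sketch of the generic forms (binary focal loss per Lin et al., additive input noise); it is not the paper's implementation, and the `gamma`, `alpha`, and `sigma` values are illustrative defaults, not the authors' settings:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-8):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so that
    confidently classified (easy) samples contribute little, focusing
    training on hard examples such as hard negatives."""
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)))

def inject_gaussian_noise(x, sigma=0.05, rng=None):
    """Training-time regularizer: add zero-mean Gaussian noise to
    embedding vectors so the model cannot overfit exact feature values."""
    if rng is None:
        rng = np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)
```

The key property of focal loss is visible directly: a well-classified positive (p = 0.99) incurs a much smaller loss than a marginal one (p = 0.6), so gradient mass concentrates on the hard cases.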


