[2510.25126] Bridging the Divide: End-to-End Sequence-Graph Learning
Computer Science > Machine Learning

arXiv:2510.25126 (cs)

[Submitted on 29 Oct 2025 (v1), last revised 1 Apr 2026 (this version, v2)]

Title: Bridging the Divide: End-to-End Sequence-Graph Learning

Authors: Yuen Chen, Yulun Wu, Samuel Sharpe, Igor Melnyk, Nam H. Nguyen, Furong Huang, C. Bayan Bruss, Rizal Fathony

Abstract: Many real-world prediction tasks, particularly those involving entities such as customers or patients, involve both sequential and relational data. Each entity maintains its own sequence of events while simultaneously engaging in relationships with others. Existing methods in sequence and graph modeling often overlook one modality in favor of the other. We argue that these two facets should instead be integrated and learned jointly. We introduce BRIDGE, a unified end-to-end architecture that couples a sequence model with a graph module under a single objective, allowing gradients to flow across both components to learn task-aligned representations. To enable fine-grained interaction, we propose TOKENXATTN, a token-level cross-attention layer that facilitates message passing between specific events in neighboring sequences. Across two settings, relationship prediction and fraud detection, BRIDGE consistently outperforms static graph models, temporal graph methods, and sequence-only baselines on both ranking and classification…
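To make the idea of token-level cross-attention concrete, the sketch below shows a minimal single-head version in NumPy: each event token in a focal entity's sequence attends over the event tokens of its graph neighbors and receives a residual message. This is only an illustration of the general mechanism under assumed shapes and projection matrices, not the authors' TOKENXATTN implementation.

```python
import numpy as np

def token_cross_attention(seq, nbrs, Wq, Wk, Wv):
    """Single-head token-level cross-attention sketch (hypothetical,
    not the paper's implementation).

    seq:  (L, d) event tokens of the focal entity's sequence
    nbrs: (M, d) event tokens pooled from neighboring sequences
    Wq, Wk, Wv: (d, d) assumed learned projection matrices
    Returns a residually updated sequence of shape (L, d).
    """
    q = seq @ Wq                                  # queries from own events
    k = nbrs @ Wk                                 # keys from neighbor events
    v = nbrs @ Wv                                 # values from neighbor events
    scores = q @ k.T / np.sqrt(q.shape[-1])       # (L, M) scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # attention over neighbor events
    return seq + w @ v                            # residual message-passing update

# Toy usage: 10 events for the focal entity, 30 neighbor events, d = 16.
rng = np.random.default_rng(0)
d = 16
seq = rng.normal(size=(10, d))
nbrs = rng.normal(size=(30, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = token_cross_attention(seq, nbrs, Wq, Wk, Wv)
```

In an end-to-end setup like the one the abstract describes, a layer of this form would sit between the sequence encoder and the graph module so that gradients from a single task objective reach both components.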