[2603.26801] Sparse-by-Design Cross-Modality Prediction: L0-Gated Representations for Reliable and Efficient Learning
Computer Science > Machine Learning

arXiv:2603.26801 (cs) [Submitted on 26 Mar 2026]

Title: Sparse-by-Design Cross-Modality Prediction: L0-Gated Representations for Reliable and Efficient Learning
Authors: Filippo Cenacchi

Abstract: Predictive systems increasingly span heterogeneous modalities such as graphs, language, and tabular records, but sparsity and efficiency remain modality-specific (graph edge or neighborhood sparsification, Transformer head or layer pruning, and separate tabular feature-selection pipelines). This fragmentation makes results hard to compare, complicates deployment, and weakens reliability analysis across end-to-end KDD pipelines. A unified sparsification primitive would make accuracy-efficiency trade-offs comparable across modalities and enable controlled reliability analysis under representation compression. We ask whether a single representation-level mechanism can yield comparable accuracy-efficiency trade-offs across modalities while preserving or improving probability calibration. We propose L0-Gated Cross-Modality Learning (L0GM), a modality-agnostic, feature-wise hard-concrete gating framework that enforces L0-style sparsity directly on learned representations. L0GM attaches hard-concrete stochastic gates to each modality's classifier-facing interface: node em...
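The abstract describes attaching feature-wise hard-concrete stochastic gates to each modality's learned representation. Below is a minimal sketch of such a gate, assuming the standard hard-concrete formulation of Louizos et al. (2018); the class name `HardConcreteGate` and the hyperparameter values (beta = 2/3, gamma = -0.1, zeta = 1.1) are illustrative defaults, not taken from the paper.

```python
import math

import torch
import torch.nn as nn


class HardConcreteGate(nn.Module):
    """Feature-wise stochastic gate with an expected-L0 sparsity penalty."""

    def __init__(self, num_features: int, beta: float = 2.0 / 3.0,
                 gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        # One learnable location parameter per representation feature.
        self.log_alpha = nn.Parameter(torch.zeros(num_features))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Reparameterized sample from the binary concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid(
                (torch.log(u) - torch.log1p(-u) + self.log_alpha) / self.beta
            )
        else:
            # Deterministic gate at evaluation time.
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (gamma, zeta) and clip, so gates can be exactly 0 or 1.
        z = torch.clamp(s * (self.zeta - self.gamma) + self.gamma, 0.0, 1.0)
        return h * z

    def l0_penalty(self) -> torch.Tensor:
        # Expected number of active gates, sum over features of P(z != 0);
        # scaled by a sparsity coefficient, this is added to the task loss.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()
```

In this sketch, the gated representation and the scaled `l0_penalty()` enter the training loss together; at evaluation time, features whose deterministic gate clips to zero can be dropped from the classifier-facing interface outright, which is where the efficiency gain under representation compression would come from.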