[2512.09530] Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport
Summary
This paper examines self-attention training for tabular data through the lens of Optimal Transport (OT), presenting an OT-based training algorithm that reaches accuracy comparable to Transformers at lower computational cost.
Why It Matters
The research addresses the limitations of traditional self-attention mechanisms in tabular data classification, providing a more efficient training approach that could enhance machine learning applications in various fields, particularly in data-intensive domains like healthcare.
Key Takeaways
- Introduces an OT-based alternative for training self-attention in tabular classification.
- Demonstrates that final self-attention mappings can approximate OT optimal couplings.
- Presents a novel algorithm that generates class-specific dummy Gaussian distributions for improved training.
- Achieves competitive accuracy with reduced computational costs compared to traditional Transformers.
- Shows that performance is sensitive to the geometry of the dummy distributions, so their design requires care.
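The proposed pipeline (class-specific Gaussian targets, OT alignment, then an MLP that generalizes the mapping) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the target means, scales, and the use of the Hungarian algorithm for the discrete OT coupling are all assumptions; the final MLP regression step is only indicated in a comment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Toy 2-class tabular dataset (shapes and labels are illustrative).
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)

# Step 1: class-specific dummy Gaussian targets. Well-separated means
# are an assumed design choice, not a value taken from the paper.
targets = np.empty_like(X)
for c in (0, 1):
    idx = np.flatnonzero(y == c)
    mean = np.full(X.shape[1], 3.0 * (2 * c - 1))
    targets[idx] = rng.normal(loc=mean, scale=0.5,
                              size=(idx.size, X.shape[1]))

# Step 2: discrete OT alignment within each class. With uniform weights
# and equal sample sizes, the optimal coupling is an assignment, which
# the Hungarian algorithm solves exactly on squared Euclidean costs.
aligned = np.empty_like(X)
for c in (0, 1):
    idx = np.flatnonzero(y == c)
    cost = ((X[idx, None, :] - targets[idx][None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    aligned[idx[rows]] = targets[idx][cols]

# Step 3: an MLP would now be trained to regress X -> aligned, so the
# OT mapping generalizes to unseen rows (training loop omitted here).
print(aligned.shape)
```

Each input row now has an OT-matched target in its class's Gaussian cloud; the supervised regression in step 3 is what replaces self-attention training in this scheme.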
Statistics > Machine Learning
arXiv:2512.09530 (stat)
[Submitted on 10 Dec 2025 (v1), last revised 18 Feb 2026 (this version, v2)]
Title: Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport
Authors: Alessandro Quadrio, Antonio Candelieri
Abstract: This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset. Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP section on synthetic data partially improves convergence but is sensitive to their initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment with the data, and trains an MLP to generalize this mapping. The method achieves accuracy comparable to Transformer...
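The abstract tracks training via discrete OT metrics such as the Wasserstein distance. For two equal-size point clouds with uniform weights, the exact 2-Wasserstein distance reduces to an optimal assignment problem; a minimal sketch (the function name and test data are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(a, b):
    """Exact 2-Wasserstein distance between two equal-size empirical
    distributions with uniform weights, via optimal assignment."""
    cost = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return float(np.sqrt(cost[rows, cols].mean()))

rng = np.random.default_rng(1)
a = rng.normal(size=(64, 2))
b = a + np.array([1.0, 0.0])  # pure translation by one unit

d = wasserstein2(a, b)
print(d)  # for a pure translation, W2 equals the shift length: 1.0
```

For unequal sample sizes or non-uniform weights one would solve the full transport linear program instead (e.g. with the POT library), but the assignment form above suffices for balanced batches like those tracked during training.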