[2601.21331] Convex Loss Functions for Support Vector Machines (SVMs) and Neural Networks
Summary
This paper introduces a new convex loss function for Support Vector Machines (SVMs) and neural networks, demonstrating improved performance in classification and regression tasks through experimental validation.
Why It Matters
The development of more effective loss functions can significantly enhance the performance of machine learning models, particularly in SVMs and neural networks. This research addresses scalability issues and aims to improve generalization, which is crucial for real-world applications in various domains.
Key Takeaways
- Proposes a novel convex loss function for SVMs and neural networks.
- Demonstrates up to 2.0% improvement in F1 scores and 1.0% reduction in MSE.
- Highlights the importance of pattern correlations in enhancing generalization.
- Results indicate consistent performance improvements over standard losses.
- Research addresses scalability challenges in applying SVMs to larger datasets.
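The takeaways above compare the proposed loss against "standard losses". As background, a minimal sketch of those standard convex SVM baselines, the hinge loss for classification and the epsilon-insensitive loss for regression (function names and the `eps` value here are illustrative, not from the paper):

```python
import numpy as np

def hinge_loss(y, f):
    """Standard convex hinge loss for binary classification.
    y: labels in {-1, +1}; f: real-valued decision scores."""
    return np.maximum(0.0, 1.0 - y * f)

def eps_insensitive_loss(y, f, eps=0.1):
    """Standard convex epsilon-insensitive loss for SVM regression.
    Errors smaller than eps incur no penalty."""
    return np.maximum(0.0, np.abs(y - f) - eps)

y = np.array([1.0, -1.0])
f = np.array([0.5, 0.5])
print(hinge_loss(y, f))  # → [0.5 1.5]
```

Both losses are convex in `f`, which is the property the paper's new loss preserves while adding pattern-correlation terms.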
Computer Science > Machine Learning
arXiv:2601.21331 (cs)
[Submitted on 29 Jan 2026 (v1), last revised 25 Feb 2026 (this version, v3)]
Title: Convex Loss Functions for Support Vector Machines (SVMs) and Neural Networks
Authors: Filippo Portera
Abstract: We propose a new convex loss for Support Vector Machines, covering both the binary classification and the regression models. We derive the corresponding dual problems and experiment with them on several small datasets; the small size of these datasets reflects the difficulty of scaling the SVM method to larger instances. This preliminary study indicates that using pattern correlations inside the loss function can enhance generalisation performance. Our method consistently achieved comparable or superior performance, with improvements of up to 2.0% in F1 score for classification tasks and a 1.0% reduction in Mean Squared Error (MSE) for regression tasks across various datasets, compared to standard losses. Consistently, the generalisation measures are never worse than those of the standard losses, and several times they are better. In our opinion, this loss merits a careful study coupled with shallow and deep neural networks; we present some novel results obtained with those architectures.
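The abstract emphasises that pattern correlations drive the generalisation gains and that the method is derived via SVM dual problems. A hedged sketch of where correlations appear in the *standard* soft-margin SVM dual (shown for reference; the paper's new loss modifies this objective in a way not reproduced here):

```python
import numpy as np

def svm_dual_objective(alpha, y, K):
    """Standard soft-margin SVM dual objective (reference formulation, not
    the paper's new loss): sum_i alpha_i - 0.5 * sum_ij a_i a_j y_i y_j K_ij.
    Correlations between training patterns enter through the Gram matrix K."""
    return alpha.sum() - 0.5 * alpha @ (np.outer(y, y) * K) @ alpha

# Toy data: two orthogonal patterns, one per class (illustrative only).
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])
y = np.array([1.0, -1.0])
K = X @ X.T                  # linear kernel: pairwise pattern correlations
alpha = np.array([1.0, 1.0])
print(svm_dual_objective(alpha, y, K))  # → 1.0
```

The quadratic term `alpha @ (np.outer(y, y) * K) @ alpha` is also the source of the scalability limit the abstract mentions: the Gram matrix grows quadratically with the number of training patterns.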