[2507.01781] Symbolic Branch Networks: Tree-Inherited Neural Models for Interpretable Multiclass Classification
Summary
This article presents Symbolic Branch Networks (SBNs), neural models whose architecture is inherited directly from an ensemble of decision trees, combining interpretability with gradient-based learning in multiclass classification tasks.
Why It Matters
The development of SBNs is significant as it combines the strengths of neural networks with the transparency of decision trees, addressing the growing demand for interpretable AI models. This is crucial for applications where understanding model decisions is as important as accuracy.
Key Takeaways
- SBNs leverage tree structures to maintain interpretability while enhancing predictive performance.
- The model allows for gradient-based learning, refining feature relevance without losing symbolic integrity.
- SBNs consistently match or surpass XGBoost across 28 multiclass tabular datasets from the OpenML CC-18 benchmark.
- The research highlights the potential of combining symbolic structures with neural optimization.
- The fully symbolic SBN* variant remains competitive even with both weight matrices held fixed, underscoring the robustness of the tree-inherited structure.
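The semi-symbolic learning scheme in the takeaways above (branch-to-class weights frozen, feature-to-branch weights refined) can be sketched as follows. This is a hedged toy illustration, not the paper's formulation: the shapes, the linear branch activation, and the cross-entropy loss are all assumptions.

```python
import numpy as np

# Sketch of semi-symbolic training: W2 (branch-to-class) stays frozen to
# preserve branch semantics, while W1 (feature-to-branch) is refined by
# gradient descent. Shapes and loss are illustrative assumptions.
rng = np.random.default_rng(0)

n_features, n_branches, n_classes = 4, 6, 3
W1 = rng.normal(size=(n_branches, n_features))  # learnable (tree-initialized in the paper)
W2 = rng.normal(size=(n_branches, n_classes))   # frozen symbolic structure
W2_frozen = W2.copy()                           # kept to verify W2 never changes

x = rng.normal(size=n_features)  # one toy sample
y = 1                            # its class label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(W1):
    return -np.log(softmax(W2.T @ (W1 @ x))[y])

loss0 = loss(W1)
lr = 0.01
for _ in range(200):
    p = softmax(W2.T @ (W1 @ x))
    grad_logits = p.copy()
    grad_logits[y] -= 1.0            # d(cross-entropy)/d(logits)
    grad_h = W2 @ grad_logits        # backprop through the *fixed* W2
    W1 -= lr * np.outer(grad_h, x)   # only W1 is updated
loss1 = loss(W1)
print(f"loss: {loss0:.3f} -> {loss1:.3f}")
```

Because only W1 receives updates, the branch-to-class mapping (and hence each branch's semantic meaning) is exactly the one inherited from the trees.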
Computer Science > Machine Learning
arXiv:2507.01781 (cs) [Submitted on 2 Jul 2025 (v1), last revised 22 Feb 2026 (this version, v2)]
Authors: Dalia Rodríguez-Salas
Abstract
Symbolic Branch Networks (SBNs) are neural models whose architecture is inherited directly from an ensemble of decision trees. Each root-to-parent-of-leaf decision path is mapped to a hidden neuron, and the matrices $W_{1}$ (feature-to-branch) and $W_{2}$ (branch-to-class) encode the symbolic structure of the ensemble. Because these matrices originate from the trees, SBNs preserve transparent feature relevance and branch-level semantics while enabling gradient-based learning. The primary contribution of this work is SBN, a semi-symbolic variant that preserves branch semantics by keeping $W_{2}$ fixed, while allowing $W_{1}$ to be refined through learning. This controlled relaxation improves predictive accuracy without altering the underlying symbolic structure. Across 28 multiclass tabular datasets from the OpenML CC-18 benchmark, SBN consistently matches or surpasses XGBoost while retaining human-interpretable branch attributions. We also analyze SBN*, a fully symbolic variant in which both $W_{1}$ and $W_{2}$ are fixed.
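The tree-to-network mapping described in the abstract can be illustrated with a minimal forward pass. The 0/1 weight encoding and ReLU activation below are assumptions for illustration; the paper's actual weight construction (thresholds, signs, leaf statistics) is richer than this toy.

```python
import numpy as np

# Toy SBN: each root-to-parent-of-leaf branch becomes one hidden neuron.
# W1 marks which features a branch tests; W2 marks which class it votes for.
# Toy ensemble: 2 features, 3 branches, 2 classes.
W1 = np.array([
    [1.0, 0.0],  # branch 0 tests feature 0
    [1.0, 1.0],  # branch 1 tests features 0 and 1
    [0.0, 1.0],  # branch 2 tests feature 1
])
W2 = np.array([
    [1.0, 0.0],  # branch 0 votes for class 0
    [0.0, 1.0],  # branch 1 votes for class 1
    [0.0, 1.0],  # branch 2 votes for class 1
])

def sbn_forward(x):
    """Features -> branch activations -> class scores (ReLU is an assumption)."""
    h = np.maximum(W1 @ x, 0.0)  # one activation per symbolic branch
    return W2.T @ h              # branches vote for their classes

x = np.array([0.5, 2.0])
scores = sbn_forward(x)
print(scores)  # → [0.5 4.5]; argmax gives the predicted class
```

Because each hidden unit corresponds to a named decision-tree branch, the activation vector `h` can be read directly as branch-level attributions for the prediction.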