[2602.15877] Genetic Generalized Additive Models
Summary
This article presents Genetic Generalized Additive Models (GGAMs), which use a multi-objective genetic algorithm to optimize model structure, improving predictive accuracy and interpretability while reducing complexity.
Why It Matters
The development of GGAMs addresses the challenge of manually configuring Generalized Additive Models, making it easier for practitioners to achieve high-performing models that are also interpretable. This is crucial in fields where model transparency is essential for trust and decision-making.
Key Takeaways
- GGAMs optimize model structure using the NSGA-II genetic algorithm.
- They balance predictive accuracy with interpretability and complexity.
- Experiments show GGAMs outperform baseline models in accuracy with lower complexity.
- The framework enhances model transparency, crucial for practical applications.
- Code for the models is publicly available, promoting further research.
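The paper pairs NSGA-II with GAM configuration search, trading off prediction error (RMSE) against a complexity penalty. As a minimal sketch of the Pareto selection at the core of NSGA-II, the snippet below extracts the non-dominated front from a set of candidate scores; the candidate values are hypothetical, not results from the paper, and the real algorithm adds non-dominated sorting into multiple fronts, crowding distance, and genetic operators.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one
    (both objectives are minimized: lower RMSE, lower complexity)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of points: the first NSGA-II front."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate GAM configurations scored as (RMSE, complexity penalty).
candidates = [(0.52, 9.0), (0.55, 4.0), (0.60, 2.5), (0.58, 6.0), (0.70, 2.0)]
front = pareto_front(candidates)
# (0.58, 6.0) is dominated by (0.55, 4.0); the other four are mutually non-dominated.
```

Any point on the resulting front is a defensible model choice: moving along the front trades accuracy for simplicity, which is exactly the accuracy-versus-interpretability balance the paper targets.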
Computer Science > Machine Learning
arXiv:2602.15877 (cs) [Submitted on 2 Feb 2026]
Title: Genetic Generalized Additive Models
Authors: Kaaustaaub Shankar, Kelly Cohen
Abstract: Generalized Additive Models (GAMs) balance predictive accuracy and interpretability, but manually configuring their structure is challenging. We propose using the multi-objective genetic algorithm NSGA-II to automatically optimize GAMs, jointly minimizing prediction error (RMSE) and a Complexity Penalty that captures sparsity, smoothness, and uncertainty. Experiments on the California Housing dataset show that NSGA-II discovers GAMs that outperform baseline LinearGAMs in accuracy or match their performance with substantially lower complexity. The resulting models are simpler, smoother, and exhibit narrower confidence intervals, enhancing interpretability. This framework provides a general approach to the automated optimization of transparent, high-performing models. The code can be found at this https URL.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2602.15877 [cs.LG] (or arXiv:2602.15877v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.15877 (arXiv-issued DOI via DataCite, pending registration)