[2602.16503] Interpretability-by-Design with Accurate Locally Additive Models and Conditional Feature Effects

arXiv - AI 3 min read Article

Summary

This paper introduces Conditionally Additive Local Models (CALMs), which extend Generalized Additive Models (GAMs) by allowing multiple univariate shape functions per feature, each active in a different region of the input space. This lets the model capture feature interactions, and thus improve predictive accuracy, while each effect remains locally interpretable.

Why It Matters

As machine learning models become more complex, the need for interpretability grows. CALMs provide a solution that balances accuracy and interpretability, making them valuable for practitioners who require both performance and transparency in their models, especially in sensitive applications.

Key Takeaways

  • CALMs improve upon GAMs by allowing multiple univariate shape functions for different input regions.
  • The model maintains interpretability while achieving accuracy comparable to more complex models like GA²Ms.
  • A distillation-based training pipeline is proposed to identify homogeneous regions with limited interactions.

Computer Science > Machine Learning

arXiv:2602.16503 (cs) · Submitted on 18 Feb 2026

Title: Interpretability-by-Design with Accurate Locally Additive Models and Conditional Feature Effects

Authors: Vasilis Gkolemis, Loukas Kavouras, Dimitrios Kyriakopoulos, Konstantinos Tsopelas, Dimitrios Rontogiannis, Giuseppe Casalicchio, Theodore Dalamagas, Christos Diou

Abstract: Generalized additive models (GAMs) offer interpretability through independent univariate feature effects but underfit when interactions are present in the data. GA²Ms add selected pairwise interactions, which improves accuracy but sacrifices interpretability and limits model auditing. We propose Conditionally Additive Local Models (CALMs), a new model class that balances the interpretability of GAMs with the accuracy of GA²Ms. CALMs allow multiple univariate shape functions per feature, each active in a different region of the input space. These regions are defined independently for each feature as simple logical conditions (thresholds) on the features it interacts with. As a result, effects remain locally additive while varying across subregions to capture interactions. We further propose a principled distillation-based training pipeline that identifies homogeneous regions with limited interactions and fits ...
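To make the model structure described in the abstract concrete, here is a minimal sketch of how a CALM-style prediction could be assembled: each feature carries several univariate shape functions, and a simple threshold condition on an interacting feature selects which one is active. All names and the toy shape functions below are illustrative assumptions, not the authors' actual implementation or API.

```python
# Hypothetical sketch of a conditionally additive local prediction.
# Not the authors' code: class names, gating rule, and shapes are assumed.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ConditionalShape:
    """One univariate shape function, active only in the input region
    where a threshold condition on an interacting feature holds."""
    condition: Callable[[dict], bool]   # e.g. lambda x: x["age"] <= 40
    shape: Callable[[float], float]     # univariate effect f_j(x_j)


def calm_predict(x: dict,
                 effects: Dict[str, List[ConditionalShape]],
                 intercept: float = 0.0) -> float:
    """Locally additive prediction: for each feature, exactly one
    region-specific shape function contributes, chosen by the first
    matching threshold condition."""
    y = intercept
    for feature, shapes in effects.items():
        for cs in shapes:
            if cs.condition(x):
                y += cs.shape(x[feature])
                break  # one active shape per feature in this region
    return y


# Toy example: the effect of "income" differs across two "age" regions.
effects = {
    "income": [
        ConditionalShape(lambda x: x["age"] <= 40, lambda v: 0.5 * v),
        ConditionalShape(lambda x: x["age"] > 40,  lambda v: 0.2 * v),
    ],
}
print(calm_predict({"income": 10.0, "age": 30}, effects))  # 5.0
print(calm_predict({"income": 10.0, "age": 55}, effects))  # 2.0
```

Within either age region the model is a plain additive model, so each active shape function can still be plotted and audited like a GAM effect, which is the interpretability property the paper emphasizes.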
