[2604.04270] A Logical-Rule Autoencoder for Interpretable Recommendations
Computer Science > Information Retrieval
arXiv:2604.04270 (cs)
[Submitted on 5 Apr 2026]

Title: A Logical-Rule Autoencoder for Interpretable Recommendations
Authors: Jinhao Pan, Bowen Wei, Ziwei Zhu

Abstract: Most deep learning recommendation models operate as black boxes, relying on latent representations that obscure their decision process. This lack of intrinsic interpretability raises concerns in applications that require transparency and accountability. In this work, we propose a Logical-rule Interpretable Autoencoder (LIA) for collaborative filtering that is interpretable by design. LIA introduces a learnable logical rule layer in which each rule neuron is equipped with a gate parameter that automatically selects between AND and OR operators during training, enabling the model to discover diverse logical patterns directly from data. To support functional completeness without doubling the input dimensionality, LIA encodes negation through the sign of connection weights, providing a parameter-efficient mechanism for expressing both positive and negated item conditions within each rule. By learning explicit, human-readable reconstruction rules, LIA allows users to directly trace the decision process behind each recommendation. Extensive experiments show that our method achieves improved recommendation performance over t...
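The two mechanisms described in the abstract — a per-rule gate that softly interpolates between AND and OR, and negation encoded in the sign of connection weights — can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (soft AND as a weighted geometric mean, soft OR via De Morgan's law, a sigmoid gate blending the two); the function and variable names are hypothetical and the paper's actual parameterization may differ.

```python
import numpy as np

def rule_layer(x, W, g):
    """Hypothetical soft logical rule layer, one row of W per rule neuron.

    x: (n_items,) item signals in [0, 1]
    W: (n_rules, n_items) connection weights; the SIGN encodes negation,
       so a negative weight reads the literal as (1 - x) without
       doubling the input dimensionality
    g: (n_rules,) gate logits; sigmoid(g) near 1 acts as AND, near 0 as OR
    """
    # Negation via weight sign: flip the literal where the weight is negative.
    lit = np.where(W >= 0, x, 1.0 - x)                     # (n_rules, n_items)

    # Normalized membership weights from weight magnitudes.
    m = np.abs(W) / (np.abs(W).sum(axis=1, keepdims=True) + 1e-8)

    # Soft AND: weighted geometric mean of the literals.
    log_lit = np.log(np.clip(lit, 1e-8, 1.0))
    soft_and = np.exp((m * log_lit).sum(axis=1))

    # Soft OR via De Morgan: 1 - AND of the complemented literals.
    log_neg = np.log(np.clip(1.0 - lit, 1e-8, 1.0))
    soft_or = 1.0 - np.exp((m * log_neg).sum(axis=1))

    # Learnable gate selects (softly) between AND and OR per rule.
    gate = 1.0 / (1.0 + np.exp(-g))
    return gate * soft_and + (1.0 - gate) * soft_or
```

With `x = [1, 0]` and positive weights, a rule gated toward AND outputs near 0 (one condition fails) while the same rule gated toward OR outputs near 1; flipping the second weight's sign negates the failing condition and restores the AND output — which is how a trained layer would read off a human-readable rule such as "item_a AND NOT item_b".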