[2510.10063] CLMN: Concept based Language Models via Neural Symbolic Reasoning
Computer Science > Computation and Language

arXiv:2510.10063 (cs)

[Submitted on 11 Oct 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: CLMN: Concept based Language Models via Neural Symbolic Reasoning
Authors: Yibo Yang

Abstract: Deep learning has advanced NLP, but interpretability remains limited, especially in high-stakes domains such as healthcare and finance. Concept bottleneck models tie predictions to human-interpretable concepts in vision, but NLP adaptations either use binary concept activations that degrade text representations or latent concepts that weaken semantics, and they rarely model dynamic concept interactions such as negation and context. We introduce the Concept Language Model Network (CLMN), a neural-symbolic framework that preserves both performance and interpretability. CLMN represents concepts as continuous, human-readable embeddings and applies fuzzy-logic reasoning to learn adaptive interaction rules that state how concepts affect each other and the final decision. The model augments the original text features with concept-aware representations and automatically induces interpretable logic rules. Across multiple datasets and pre-trained language models, CLMN achieves higher accuracy than existing concept-based methods while improving explanation quality. These results show that integrating neural representations with symbolic reasoning in a unified concept spac...
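The abstract's core mechanism can be sketched in a few lines: continuous concept activations in [0, 1], fuzzy-logic operators (product t-norm for AND, complement for NOT) to model interactions such as negation, and concatenation of the concept-aware representation with the original text features. This is an illustrative sketch only, not the paper's implementation; all names (`W_c`, `E`, the example rule) and the random projections standing in for learned weights are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_text, n_concepts = 8, 4

h = rng.normal(size=d_text)               # text-encoder output for one example
W_c = rng.normal(size=(n_concepts, d_text))  # stand-in for a learned concept projection

# Continuous (fuzzy) concept activations in [0, 1], not hard binary gates
a = sigmoid(W_c @ h)

# Fuzzy-logic interaction rule, e.g. "concept_0 AND NOT concept_1" modulating concept_2
# (product t-norm for AND, 1 - x for NOT); in CLMN such rules would be learned
rule = a[0] * (1.0 - a[1])
a_mod = a.copy()
a_mod[2] = a[2] * rule

# Concept-aware representation: activation-weighted sum of concept embeddings
E = rng.normal(size=(n_concepts, d_text))  # stand-in for human-readable concept embeddings
concept_repr = a_mod @ E

# Augment the original text features with the concept-aware representation
augmented = np.concatenate([h, concept_repr])
print(augmented.shape)  # (16,)
```

Because the activations stay continuous, gradients flow through the concept layer during training, while the rule structure (AND/NOT compositions over named concepts) remains readable after the fact.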