[2603.03080] Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation
Computer Science > Artificial Intelligence
arXiv:2603.03080 (cs) [Submitted on 3 Mar 2026]
Title: Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation
Authors: Chengkai Wang, Baisong Liu
Abstract: LLM-based explainable recommenders can produce fluent explanations that are factually correct yet still justify items using attributes that conflict with a user's historical preferences. Such preference-inconsistent explanations yield logically valid but unconvincing reasoning and are largely missed by standard hallucination or faithfulness metrics. We formalize this failure mode and propose PURE, a preference-aware reasoning framework that follows a select-then-generate paradigm. Instead of only improving generation, PURE intervenes in evidence selection: it selects a compact set of multi-hop, item-centric reasoning paths that are both factually grounded and aligned with the user's preference structure, guided by user intent, specificity, and diversity to suppress generic, weakly personalized evidence. The selected evidence is then injected into LLM generation via structure-aware prompting that preserves relational constraints. To measure preference inconsistency, we introduce a feature-level, user-centric evaluation metric that reveals misal...
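The abstract's select-then-generate idea, choosing a compact set of reasoning paths that balance intent alignment, specificity, and diversity, can be illustrated with a minimal sketch. The paper's actual scoring functions are not specified in the abstract; the `ReasoningPath` fields, the Jaccard diversity term, and the weights below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of PURE-style evidence selection. All names, weights,
# and scoring choices here are assumptions for illustration; the abstract
# only states that selection is guided by intent, specificity, and diversity.
from dataclasses import dataclass

@dataclass
class ReasoningPath:
    hops: list[str]        # multi-hop path, e.g. ["item", "directed_by", "Nolan"]
    intent_score: float    # alignment with the inferred user intent
    specificity: float     # low for generic attributes ("popular", "well-made")

def jaccard(a: list[str], b: list[str]) -> float:
    """Overlap between two paths' hop sets, used as a redundancy measure."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_paths(candidates: list[ReasoningPath], k: int = 3,
                 w_intent: float = 0.5, w_spec: float = 0.3,
                 w_div: float = 0.2) -> list[ReasoningPath]:
    """Greedily pick k paths, trading off intent fit, specificity, diversity."""
    selected: list[ReasoningPath] = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def gain(p: ReasoningPath) -> float:
            # Diversity term: 1 minus the worst overlap with already-chosen paths,
            # so near-duplicate evidence is suppressed.
            overlap = max((jaccard(p.hops, s.hops) for s in selected), default=0.0)
            return (w_intent * p.intent_score
                    + w_spec * p.specificity
                    + w_div * (1.0 - overlap))
        best = max(pool, key=gain)
        selected.append(best)
        pool.remove(best)
    return selected
```

Under this sketch, two near-identical paths cannot both be selected: once one is chosen, the other's diversity term collapses and a more distinct path wins, which mirrors the abstract's goal of suppressing generic, weakly personalized evidence before generation.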