[2603.03080] Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation


Computer Science > Artificial Intelligence
arXiv:2603.03080 (cs)
Submitted on 3 Mar 2026

Title: Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation
Authors: Chengkai Wang, Baisong Liu

Abstract: LLM-based explainable recommenders can produce fluent explanations that are factually correct yet still justify items using attributes that conflict with a user's historical preferences. Such preference-inconsistent explanations yield logically valid but unconvincing reasoning and are largely missed by standard hallucination and faithfulness metrics. We formalize this failure mode and propose PURE, a preference-aware reasoning framework that follows a select-then-generate paradigm. Instead of only improving generation, PURE intervenes in evidence selection: it selects a compact set of multi-hop, item-centric reasoning paths that are both factually grounded and aligned with the user's preference structure, guided by user intent, specificity, and diversity to suppress generic, weakly personalized evidence. The selected evidence is then injected into LLM generation via structure-aware prompting that preserves relational constraints. To measure preference inconsistency, we introduce a feature-level, user-centric evaluation metric that reveals misal...
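The abstract describes evidence selection guided by user intent, specificity, and diversity, but does not spell out the scoring. The sketch below is an illustrative guess at what such a selection step could look like, not the paper's actual method: the `ReasoningPath` type, the three weights, and the greedy top-k loop are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningPath:
    """A multi-hop, item-centric path, e.g. item -> has_genre -> jazz (hypothetical schema)."""
    hops: tuple                       # sequence of (entity, relation, entity) steps
    matched_user_features: set        # user-history features the path touches
    idf: float                        # specificity proxy: rarer attributes score higher

def select_evidence(paths, user_features, k=3,
                    w_intent=1.0, w_spec=0.5, w_div=0.5):
    """Greedy select-then-generate evidence step (illustrative only).

    Scores each candidate path by preference alignment (overlap with the
    user's historical features), specificity (idf), and diversity (novel
    features not covered by already-selected paths), then picks the best
    path k times. Generic paths with no user-feature overlap score low,
    which is one plausible way to suppress weakly personalized evidence.
    """
    selected, covered = [], set()
    candidates = list(paths)
    while candidates and len(selected) < k:
        def score(p):
            intent = len(p.matched_user_features & user_features)
            novelty = len(p.matched_user_features - covered)
            return w_intent * intent + w_spec * p.idf + w_div * novelty
        best = max(candidates, key=score)
        selected.append(best)
        covered |= best.matched_user_features
        candidates.remove(best)
    return selected
```

Under these (assumed) weights, a path that matches several of the user's historical features beats a more specific but preference-irrelevant one, and a second pick that only repeats already-covered features is penalized by the diversity term.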

Originally published on March 04, 2026. Curated by AI News.


