Research 2026-05-08

Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation

Source: arXiv cs.AI

arXiv:2603.03080v2 (Announce Type: replace)

Abstract: LLM-based explainable recommenders can produce fluent explanations that are factually correct, yet still justify items using attributes that conflict with a user's historical preferences. Such preference-inconsistent explanations yield logically...

arxivpapers