Research 2026-04-22
FaithLens: Detecting and Explaining Faithfulness Hallucination
Source: arXiv cs.AI
arXiv:2512.20182v4 Announce Type: replace-cross
Abstract: Recognizing whether outputs from large language models (LLMs) contain faithfulness hallucinations is crucial for real-world applications, e.g., retrieval-augmented generation and summarization. In this paper, we introduce FaithLens, a...