Research, 2026-04-22

Detecting Hallucinations in SpeechLLMs at Inference Time Using Attention Maps

Source: Arxiv CS.AI

arXiv:2604.19565v1 (announce type: cross)

Abstract: Hallucinations in Speech Large Language Models (SpeechLLMs) pose significant risks, yet existing detection methods typically rely on gold-standard outputs that are costly or impractical to obtain. Moreover, hallucination detection methods developed...

Tags: arxivpapers