Research · 2026-05-14
Where Does Reasoning Break? Step-Level Hallucination Detection via Hidden-State Transport Geometry
Source: arXiv cs.AI
arXiv:2605.13772v1 (Announce Type: cross)

Abstract: Large language models hallucinate during multi-step reasoning, but most existing detectors operate at the trace level: they assign a single confidence score to the full output, fail to localize the first erroneous step, and often require multiple sampled...
Tags: arxiv, papers, reasoning
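The abstract's contrast between trace-level and step-level detection can be made concrete with a toy sketch. The snippet below is a hypothetical illustration, not the paper's transport-geometry method: the probe weights `w`, the `step_confidence` function, and the simulated hidden states are all invented stand-ins. It only shows how a per-step detector can localize the first suspect step, whereas a trace-level detector reduces the same trace to one aggregate score.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): contrast a
# trace-level detector, which yields one confidence score for the
# whole reasoning trace, with a step-level detector, which scores
# each step and localizes the first suspected error.

rng = np.random.default_rng(0)

# Stand-in for per-step hidden states of a reasoning trace:
# 6 steps, each a 16-dimensional hidden vector (toy data).
hidden_states = rng.normal(size=(6, 16))

# Toy probe standing in for any learned scorer that maps a hidden
# state to a "this step is sound" confidence in [0, 1].
w = rng.normal(size=16)

def step_confidence(h: np.ndarray) -> float:
    return float(1.0 / (1.0 + np.exp(-h @ w)))

step_scores = np.array([step_confidence(h) for h in hidden_states])

# Trace-level view: one aggregate score for the full output.
trace_score = step_scores.mean()

# Step-level view: flag the first step whose confidence drops below
# a threshold, localizing where the reasoning is suspected to break.
threshold = 0.5
below = np.flatnonzero(step_scores < threshold)
first_flagged = int(below[0]) if below.size else None

print(f"trace-level score:  {trace_score:.3f}")
print(f"per-step scores:    {np.round(step_scores, 3)}")
print(f"first flagged step: {first_flagged}")
```

In this toy setup the mean of per-step scores plays the role of the trace-level score; the point is only that a single aggregate cannot say *where* the trace went wrong, while per-step scores can.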