Research · 2026-04-20

Learning Uncertainty from Sequential Internal Dispersion in Large Language Models

Source: arXiv cs.AI

arXiv:2604.15741v1 (cross-listed). Abstract: Uncertainty estimation is a promising approach to detecting hallucinations in large language models (LLMs). Recent approaches commonly depend on model internal states to estimate uncertainty. However, they suffer from strict assumptions on how hidden...

arxivpapers