Research 2026-05-06

VAUQ: Vision-Aware Uncertainty Quantification for LVLM Self-Evaluation

Source: arXiv cs.AI

arXiv:2602.21054v2 (Announce Type: replace-cross)

Abstract: Large Vision-Language Models (LVLMs) frequently hallucinate, limiting their safe deployment in real-world applications. Existing LLM self-evaluation methods rely on a model's ability to estimate the correctness of its own outputs, which can...

Tags: arxiv, papers, vision