BeClaude
Research · 2026-04-20

VIB-Probe: Detecting and Mitigating Hallucinations in Vision-Language Models via Variational Information Bottleneck

Source: Arxiv CS.AI

arXiv:2601.05547v2 Announce Type: replace-cross Abstract: Vision-Language Models (VLMs) have demonstrated remarkable progress in multimodal tasks, but remain susceptible to hallucinations, where generated text deviates from the underlying visual content. Existing hallucination detection methods...
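The excerpt does not detail how VIB-Probe itself works, so the following is only general background, not the paper's method: a variational information bottleneck trains a stochastic latent representation by adding a KL penalty that limits how much information the latent retains about the input, traded off against the task loss via a coefficient `beta`. A minimal sketch of that objective (all function names here are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.

    This is the standard closed-form KL used as the VIB compression term.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def vib_objective(task_loss, mu, logvar, beta=1e-3):
    """Generic VIB loss: task loss plus beta-weighted compression penalty.

    `beta` controls the bottleneck tightness; larger beta discards more
    input information. This is a background sketch, not VIB-Probe itself.
    """
    return task_loss + beta * gaussian_kl(mu, logvar).mean()

# A standard-normal latent (mu=0, logvar=0) incurs zero KL penalty.
mu = np.zeros((2, 4))
logvar = np.zeros((2, 4))
print(gaussian_kl(mu, logvar))  # → [0. 0.]
```

In a probe setting, such a bottleneck is typically placed on intermediate model features so that only task-relevant information survives; how VIB-Probe applies this to hallucination detection is described in the full paper.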

Tags: arxiv · papers · vision