Research · 2026-04-28
Global Context or Local Detail? Adaptive Visual Grounding for Hallucination Mitigation
Source: Arxiv CS.AI
arXiv:2604.24396v1 Announce Type: cross
Abstract: Vision-Language Models (VLMs) are frequently undermined by object hallucination (generating content that contradicts visual reality) due to an over-reliance on linguistic priors. We introduce Positive-and-Negative Decoding (PND), a training-free...
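The abstract is truncated, so the exact PND mechanism is not given here. As background only, the family of training-free "contrastive decoding" methods this work appears to belong to typically rescores each token by contrasting the model's image-conditioned logits against logits from a degraded or image-free context, down-weighting tokens driven purely by the language prior. A minimal generic sketch (the function name, `alpha` parameter, and contrast formula are illustrative assumptions, not the authors' PND algorithm):

```python
def contrastive_decode_step(logits_with_image, logits_without_image, alpha=1.0):
    """Generic contrastive decoding step (illustrative, not the paper's PND).

    Rescores tokens as (1 + alpha) * p - alpha * q, where p are logits
    conditioned on the image and q are logits from the language prior
    alone; tokens favored only by the prior are suppressed.
    """
    contrast = [(1 + alpha) * p - alpha * q
                for p, q in zip(logits_with_image, logits_without_image)]
    # Greedy pick over the contrasted scores.
    return max(range(len(contrast)), key=contrast.__getitem__)


# Token 0 is strongly favored by the language prior (high logit even
# without the image); contrasting shifts the choice to token 1.
with_image = [2.6, 2.5]
without_image = [3.0, 0.0]
print(contrastive_decode_step(with_image, without_image))
```

In this toy example, greedy decoding on the image-conditioned logits alone would pick token 0, but subtracting the prior-only scores flips the choice to the visually grounded token 1.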