Research · 2026-04-20
Mechanisms of Prompt-Induced Hallucination in Vision-Language Models
Source: Arxiv CS.AI
arXiv:2601.05201v2 (announce type: replace-cross)

Abstract: Large vision-language models (VLMs) are highly capable, yet often hallucinate by favoring textual prompts over visual evidence. We study this failure mode in a controlled object-counting setting, where the prompt overstates the number of...
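The controlled setting the abstract describes can be sketched as follows: build prompts whose stated object count deviates from the true count in the image, then check whether the model's answer echoes the prompt or the visual evidence. This is a minimal illustrative sketch, not the paper's actual protocol; the function names (`make_probe`, `is_hallucination`) and the prompt template are assumptions.

```python
TRUE_COUNT = 3  # objects actually present in the image (illustrative value)

def make_probe(obj: str, true_count: int, overstatement: int) -> dict:
    """Build a prompt whose stated count exceeds the true count in the image."""
    stated = true_count + overstatement
    prompt = f"There are {stated} {obj}s in the image. How many {obj}s are there?"
    return {"prompt": prompt, "true_count": true_count, "stated_count": stated}

def is_hallucination(answer: int, probe: dict) -> bool:
    """Flag answers that repeat the prompt's overstated count over the image's."""
    return answer == probe["stated_count"] and answer != probe["true_count"]

probe = make_probe("apple", TRUE_COUNT, overstatement=2)
print(probe["prompt"])             # prompt asserting an overstated count of 5
print(is_hallucination(5, probe))  # True: model echoed the prompt's count
print(is_hallucination(3, probe))  # False: model followed the visual evidence
```

Sweeping `overstatement` over a range would give the kind of graded prompt-vs-vision conflict the abstract's counting setup implies.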
Tags: arxiv, papers, prompting, vision