Research · 2026-04-24
When Prompts Override Vision: Prompt-Induced Hallucinations in LVLMs
Source: arXiv cs.AI
arXiv:2604.21911v1 Announce Type: cross Abstract: Despite impressive progress in the capabilities of large vision-language models (LVLMs), these systems remain vulnerable to hallucinations, i.e., outputs that are not grounded in the visual input. Prior work has attributed hallucinations in LVLMs to...
Tags: arxiv, papers, prompting, vision