BeClaude
Research 2026-05-12

Revis: Sparse Latent Steering to Mitigate Object Hallucination in Large Vision-Language Models

Source: arXiv cs.AI

arXiv:2602.11824v2 (announce type: replace)

Abstract: Despite the advanced capabilities of Large Vision-Language Models (LVLMs), they frequently suffer from object hallucination. One reason is that visual features and pretrained textual representations often become intertwined in the deeper network...
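The abstract is truncated before the method details, but the title names the core idea: a sparse steering intervention on latent activations. As a rough illustration only (the function name, the top-k sparsification rule, and the "grounding direction" are assumptions, not the paper's actual method), a sparse latent steering step might add a direction vector to a layer's hidden state on only a few coordinates:

```python
# Hypothetical sketch of sparse latent steering. The paper's exact procedure
# is not visible in this snippet; names and details here are illustrative.
import numpy as np

def sparse_steer(hidden, direction, k=8, alpha=1.0):
    """Add a steering direction to a hidden state, restricted to the
    k largest-magnitude coordinates of the direction (a sparse edit)."""
    mask = np.zeros_like(direction)
    topk = np.argsort(np.abs(direction))[-k:]  # indices of the k largest |components|
    mask[topk] = 1.0
    return hidden + alpha * direction * mask

rng = np.random.default_rng(0)
h = rng.normal(size=64)   # a layer's hidden state (toy dimensionality)
d = rng.normal(size=64)   # an assumed "visual grounding" direction
h_steered = sparse_steer(h, d, k=8, alpha=0.5)
print(int(np.count_nonzero(h_steered - h)))  # at most k coordinates change
```

The sparsity constraint is the point of such a sketch: by touching only a few latent dimensions, the intervention can steer behavior (here, hypothetically, toward visually grounded tokens) while leaving most of the representation, and hence general language ability, untouched.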

Tags: arxiv, papers, vision