Research 2026-04-28
Aligning with Your Own Voice: Self-Corrected Preference Learning for Hallucination Mitigation in LVLMs
Source: arXiv cs.AI
arXiv:2604.24395v1 Announce Type: new

Abstract: Large Vision-Language Models (LVLMs) frequently suffer from hallucinations. Existing preference learning-based approaches largely rely on proprietary models to construct preference datasets. We identify that this reliance introduces a distributional...
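For context on the preference learning that the abstract refers to: such methods typically optimize a pairwise objective over (chosen, rejected) response pairs. Below is a minimal sketch of a DPO-style preference loss for a single pair, a common instance of this family; it is illustrative only and not the paper's specific method, and all function and parameter names here are hypothetical.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss for one preference pair (illustrative sketch).

    logp_* are policy log-probabilities of the chosen/rejected responses;
    ref_logp_* are the frozen reference model's log-probabilities.
    """
    # Implicit reward margin: how much more the policy (relative to the
    # reference) prefers the chosen response over the rejected one.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: minimized when the policy
    # increasingly favors the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# At zero margin the loss equals log 2; it shrinks as the policy
# learns to prefer the chosen response.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))            # log 2 ≈ 0.6931
print(dpo_loss(-1.0, -3.0, -2.0, -2.5, beta=0.5))
```

In preference-learning pipelines for hallucination mitigation, the "chosen" response is typically the less-hallucinatory one; the quality of these pairs is exactly where the dataset-construction issue raised in the abstract matters.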