Research · 2026-05-14
Bridging the Missing-Modality Gap: Improving Text-Only Calibration of Vision Language Models
Source: arXiv cs.AI
arXiv:2605.12517v1 Announce Type: cross
Abstract: Vision-language models (VLMs) are often deployed on text-only inputs, although they are trained with images. We find that removing the vision modality causes large drops in accuracy and severe miscalibration, and the model does not behave like its...
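The abstract's "severe miscalibration" claim is usually quantified with expected calibration error (ECE). Below is a minimal, generic ECE sketch for comparing the same VLM queried with and without the image input; it is an illustration of the standard metric, not the paper's method, and the function and variable names (`expected_calibration_error`, `conf_mm`, `conf_text`, etc.) are hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the
    gap between each bin's mean confidence and its empirical accuracy,
    weighted by the fraction of samples in the bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Hypothetical usage: conf_mm / conf_text would be max-softmax confidences
# from the same VLM run multimodally vs. text-only; hit_* are 0/1
# correctness indicators on the same evaluation set.
# ece_mm = expected_calibration_error(conf_mm, hit_mm)
# ece_text = expected_calibration_error(conf_text, hit_text)
```

A gap between the text-only and multimodal ECE values would reflect the missing-modality miscalibration the abstract describes.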
Tags: arxiv, papers, vision