BeClaude
Research · 2026-05-06

Gated Relational Alignment via Confidence-based Distillation for Efficient VLMs

Source: arXiv cs.AI

arXiv:2601.22709v3 (announce type: replace-cross)

Abstract: Vision-Language Models (VLMs) achieve strong multimodal performance but are costly to deploy, and post-training quantization often causes significant accuracy loss. Despite its potential, quantization-aware training for VLMs remains...
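To make the accuracy-loss claim concrete, here is a minimal sketch of uniform affine post-training quantization, the kind of scheme whose rounding error the abstract alludes to. The helper names and bit-width are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: uniform affine 8-bit quantization of a weight vector.
# Rounding to integer codes introduces the error PTQ methods must manage.

def quantize(xs, num_bits=8):
    """Map floats to integer codes in [0, 2**num_bits - 1]."""
    qmax = (1 << num_bits) - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / qmax or 1.0  # guard against a zero range
    zero_point = round(-lo / scale)
    codes = [max(0, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return codes, scale, zero_point

def dequantize(codes, scale, zero_point):
    """Recover approximate floats from the integer codes."""
    return [(q - zero_point) * scale for q in codes]

weights = [-0.31, 0.07, 0.52, -0.11, 0.98]
codes, scale, zp = quantize(weights)
recovered = dequantize(codes, scale, zp)
# Per-element round-trip error is bounded by half a quantization step (scale / 2);
# quantization-aware training instead simulates this rounding during training
# so the model can compensate for it.
errors = [abs(a - b) for a, b in zip(weights, recovered)]
```

With 8 bits the step size is small, but at lower bit-widths the same rounding error grows, which is why naive PTQ can degrade VLM accuracy noticeably.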

arxivpapers