Research · 2026-05-06
jina-vlm: Small Multilingual Vision Language Model
Source: arXiv cs.AI
arXiv:2512.04032v3 (announce type: replace-cross)

Abstract: We present jina-vlm, a token-efficient 2.4B-parameter vision-language model that achieves state-of-the-art multilingual VQA performance among open 2B-scale VLMs. The model couples a SigLIP2 vision encoder with a Qwen3 language decoder and...
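The abstract only states that a SigLIP2 vision encoder is coupled with a Qwen3 language decoder, without detailing the connector. A common coupling pattern in open VLMs is a learned projection that maps vision-encoder patch embeddings into the decoder's token-embedding space, after which the projected "visual tokens" are prepended to the text sequence. The sketch below illustrates that generic pattern with NumPy; the dimensions, the linear projector, and the prepend-to-text layout are all assumptions for illustration, not details taken from the jina-vlm paper.

```python
import numpy as np

# Hypothetical dimensions; the real jina-vlm sizes are not given in the abstract.
D_VIS, D_TXT, N_PATCH, N_TEXT = 768, 1024, 16, 8

rng = np.random.default_rng(0)

# Stand-in for SigLIP2 patch embeddings: (num_patches, vision_dim).
patch_embeds = rng.standard_normal((N_PATCH, D_VIS))

# Learned linear projector mapping vision features into the decoder's
# embedding space (a generic VLM connector, assumed here for jina-vlm).
W = rng.standard_normal((D_VIS, D_TXT)) / np.sqrt(D_VIS)
visual_tokens = patch_embeds @ W  # shape: (N_PATCH, D_TXT)

# The visual tokens are prepended to the text token embeddings, and the
# combined sequence is what a Qwen3-style decoder would process.
text_embeds = rng.standard_normal((N_TEXT, D_TXT))
decoder_input = np.concatenate([visual_tokens, text_embeds], axis=0)
print(decoder_input.shape)  # (24, 1024)
```

Token efficiency in such designs typically comes from shrinking `N_PATCH` (e.g. by pooling or merging patch tokens) before the decoder sees them, since visual tokens dominate the sequence length.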