Research · 2026-05-11
Qwen3-VL-Seg: Unlocking Open-World Referring Segmentation with Vision-Language Grounding
Source: arXiv cs.AI
arXiv:2605.07141v1 | Announce Type: cross
Abstract: Open-world referring segmentation requires grounding unconstrained language expressions to precise pixel-level regions. Existing multimodal large language models (MLLMs) exhibit strong open-world visual grounding, but their outputs remain limited to...
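To make the task definition concrete: a referring-segmentation model takes an image plus a free-form natural-language expression and returns a per-pixel binary mask for the referred region. The sketch below shows only this input/output contract with a placeholder model; the function name, signature, and dummy implementation are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of a referring-segmentation interface (hypothetical,
# NOT Qwen3-VL-Seg's actual API): image + expression -> pixel mask.
import numpy as np

def segment_by_expression(image: np.ndarray, expression: str) -> np.ndarray:
    """Toy stand-in for an MLLM-based referring-segmentation model.

    A real system would ground `expression` in `image` and decode a
    segmentation mask; here we just return an all-false placeholder
    mask with the same spatial dimensions as the input image.
    """
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    # ... actual model inference would populate `mask` here ...
    return mask

# Usage: a dummy 480x640 RGB image and an open-world expression.
image = np.zeros((480, 640, 3), dtype=np.uint8)
mask = segment_by_expression(image, "the red mug on the left")
print(mask.shape, mask.dtype)  # (480, 640) bool
```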
Tags: arxiv, papers, vision