Research · 2026-05-12
CoWorld-VLA: Thinking in a Multi-Expert World Model for Autonomous Driving
Source: Arxiv CS.AI
arXiv:2605.10426v1 (Announce Type: cross)
Abstract: Vision-Language-Action (VLA) models have emerged as a promising paradigm for end-to-end autonomous driving. However, existing reasoning mechanisms still struggle to provide planning-oriented intermediate representations: textual Chain-of-Thought...