Research · 2026-04-28
Characterizing Vision-Language-Action Models across XPUs: Constraints and Acceleration for On-Robot Deployment
Source: arXiv cs.AI
arXiv:2604.24447v1 (Announce Type: cross)

Abstract: Vision-Language-Action (VLA) models are promising for generalist robot control, but on-robot deployment is bottlenecked by real-time inference under tight cost and energy budgets. Most prior evaluations rely on desktop-grade GPUs, obscuring the...
Tags: arxiv, papers, vision