Research · 2026-04-24

VLA-Forget: Vision-Language-Action Unlearning for Embodied Foundation Models

Source: arXiv cs.AI

arXiv:2604.03956v2 Announce Type: replace-cross Abstract: Vision-language-action (VLA) models are emerging as embodied foundation models for robotic manipulation, but their deployment introduces a new unlearning challenge: removing unsafe, spurious, or privacy-sensitive behaviors without degrading...

Tags: arxiv, papers, vision