Research | 2026-05-06
VisInject: Disruption != Injection -- A Dual-Dimension Evaluation of Universal Adversarial Attacks on Vision-Language Models
Source: arXiv cs.AI
arXiv:2605.01449v1 | Announce Type: cross
Abstract: Universal adversarial attacks on aligned multimodal large language models are increasingly reported with attack success rates in the 60-80% range, suggesting the visual modality is highly vulnerable to imperceptible perturbations as a...
Tags: arxiv, papers, vision