Research · 2026-04-22
ORCA: An Agentic Reasoning Framework for Hallucination and Adversarial Robustness in Vision-Language Models
Source: arXiv cs.AI
arXiv:2509.15435v2 · Announce Type: replace-cross

Abstract: Large Vision-Language Models (LVLMs) exhibit strong multimodal capabilities but remain vulnerable to hallucinations from intrinsic errors and adversarial attacks from external exploitations, limiting their reliability in real-world...
Tags: arxiv, papers, reasoning, agents, vision