Research · 2026-04-20
HarmfulSkillBench: How Do Harmful Skills Weaponize Your Agents?
Source: Arxiv CS.AI
arXiv:2604.15415v1 (Announce Type: cross)

Abstract: Large language models (LLMs) have evolved into autonomous agents that rely on open skill ecosystems (e.g., ClawHub and Skills.Rest), which host numerous publicly reusable skills. Existing security research on these ecosystems mainly focuses on...