Research2026-04-28
PARASITE: Conditional System Prompt Poisoning to Hijack LLMs
Source: Arxiv CS.AI
arXiv:2505.16888v4 Announce Type: replace-cross Abstract: Large Language Models (LLMs) are increasingly deployed via third-party system prompts downloaded from public marketplaces. We identify a critical supply-chain vulnerability: conditional system prompt poisoning, where an adversary injects a...
arxivpapersprompting