Research · 2026-05-12
Mitigating Many-shot Jailbreak Attacks with One Single Demonstration
Source: arXiv cs.AI
arXiv:2605.08277v1 · Announce Type: cross

Abstract: Many-shot jailbreaking (MSJ) causes safety-aligned language models to answer harmful queries by preceding them with many harmful question-answer demonstrations. We study why this attack becomes stronger as the number of demonstrations increases....
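The abstract describes MSJ as prepending many question-answer demonstrations to a target query. A minimal sketch of how such a many-shot prompt is assembled, using benign placeholder content; the function name and the `Q:`/`A:` template are illustrative assumptions, not the paper's actual format:

```python
# Illustrative sketch (not from the paper): assembling a many-shot prompt
# by prepending k question-answer demonstrations to a final target query.
# The template and all names here are assumptions for illustration only.

def build_many_shot_prompt(demos, final_question):
    """Concatenate k Q/A demonstrations, then append the target question."""
    shots = [f"Q: {q}\nA: {a}" for q, a in demos]
    shots.append(f"Q: {final_question}\nA:")
    return "\n\n".join(shots)

# Benign placeholder demonstrations; an MSJ attack would use far more
# shots (often hundreds), which is the scaling the paper studies.
demos = [("What is 2+2?", "4"), ("What is the capital of France?", "Paris")]
prompt = build_many_shot_prompt(demos, "What is the boiling point of water?")
print(prompt.count("Q:"))  # → 3 (k = 2 demos, plus the target question)
```

The attack's strength reportedly grows with the number of demonstrations, i.e. with the length of `demos` above, since each shot further conditions the model's in-context behavior.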