Research · 2026-05-12
How LLMs Are Persuaded: A Few Attention Heads, Rerouted
Source: arXiv cs.AI
arXiv:2605.09314v1 Announce Type: new
Abstract: Language models can be persuaded to abandon factual knowledge. This vulnerability is central to AI safety, but its internal mechanism remains poorly understood. We uncover a compact causal mechanism for persuasion-induced factual errors. A small set...
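The truncated abstract points to a small set of attention heads with a causal role in persuasion-induced errors. The paper's actual procedure is not given here, but a standard way to test whether a head is causally implicated is zero-ablation: remove one head's contribution and measure how the output shifts. Below is a minimal PyTorch sketch of that generic technique on a toy attention module; `TinyAttention`, `head_mask`, and every other name are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn as nn

class TinyAttention(nn.Module):
    """Minimal multi-head self-attention with an optional per-head mask."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, head_mask=None):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # (B, T, D) -> (B, heads, T, d_head)
            return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = map(split, (q, k, v))
        attn = (q @ k.transpose(-2, -1)) / self.d_head ** 0.5
        z = attn.softmax(dim=-1) @ v            # (B, heads, T, d_head)
        if head_mask is not None:               # zero-ablate selected heads
            z = z * head_mask.view(1, -1, 1, 1)
        return self.out(z.transpose(1, 2).reshape(B, T, D))

torch.manual_seed(0)
layer = TinyAttention()
x = torch.randn(2, 5, 64)

baseline = layer(x)
mask = torch.ones(4)
mask[2] = 0.0                                   # knock out head 2 only
ablated = layer(x, head_mask=mask)
# A large output shift under ablation is (weak) evidence the head matters.
print((baseline - ablated).norm().item())
```

In a real interpretability study this mask would sit inside a full language model (e.g. via forward hooks) and the measured quantity would be a behavioral metric such as the logit of the factual answer, not a raw norm; the sketch only shows the intervention's shape.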