Research · 2026-05-06
LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment
Source: arXiv cs.AI
arXiv:2601.19487v2 (announce type: replace-cross)

Abstract: Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental...
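To make the "adjust the magnitude of answer vectors" idea concrete, below is a minimal sketch of generic magnitude-based activation steering, not the paper's LLM-VA method. All names (steer_hidden_states, answer_vec, alpha) are hypothetical illustrations; the abstract does not specify the paper's actual interface.

```python
import torch

def steer_hidden_states(hidden: torch.Tensor,
                        answer_vec: torch.Tensor,
                        alpha: float) -> torch.Tensor:
    """Add a scaled 'answer' direction to each hidden state.

    hidden:     (batch, seq_len, d_model) activations from one layer
    answer_vec: (d_model,) direction associated with answering vs. refusing
    alpha:      steering magnitude; the sign and size of alpha are exactly
                the knob that magnitude-based methods tune
    """
    direction = answer_vec / answer_vec.norm()  # normalize to unit length
    return hidden + alpha * direction           # broadcasts over batch/seq

if __name__ == "__main__":
    hidden = torch.randn(2, 8, 512)      # fake batch of layer activations
    answer_vec = torch.randn(512)        # fake learned answer direction
    steered = steer_hidden_states(hidden, answer_vec, alpha=1.5)
    print(steered.shape)                 # torch.Size([2, 8, 512])
```

In practice such a function would be attached as a forward hook on a chosen transformer layer. The trade-off the abstract points to follows directly from this setup: a single scalar alpha pushes all inputs uniformly toward answering or refusing, so reducing jailbreaks tends to increase over-refusal and vice versa.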