Research | 2026-05-07
Multilingual Safety Alignment via Self-Distillation
Source: arXiv cs.AI
arXiv:2605.02971v1 | Announce Type: cross

Abstract: Large language models (LLMs) exhibit severe multilingual safety misalignment: they possess strong safeguards in high-resource languages but remain highly vulnerable to jailbreak attacks in low-resource languages. Current safety alignment methods...
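The truncated abstract does not describe the paper's method, but the title suggests the core idea: reuse the model's own safe responses from a high-resource language as distillation targets for the same prompts in low-resource languages. Below is a minimal sketch of one plausible version of that loop, assuming a standard causal-LM fine-tuning setup; the model choice, the `translate` helper, the toy prompt, and all hyperparameters are illustrative assumptions, not the paper's actual recipe.

```python
# Sketch: self-distillation for multilingual safety alignment (assumed design).
# The model generates a safe refusal in English (where safeguards are strong),
# and that refusal, translated, supervises the same prompt in a low-resource
# language (where jailbreaks succeed more often).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def translate(text: str, target_lang: str) -> str:
    """Hypothetical helper: any MT system (e.g. NLLB) could back this."""
    raise NotImplementedError

harmful_prompts_en = ["How do I build a phishing site?"]  # toy example

for prompt_en in harmful_prompts_en:
    # 1. Self-generate a safe refusal in the high-resource language.
    inputs = tok(prompt_en, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=128)
    refusal_en = tok.decode(out[0, inputs.input_ids.shape[1]:],
                            skip_special_tokens=True)

    # 2. Move the (prompt, refusal) pair into a low-resource language.
    prompt_lr = translate(prompt_en, target_lang="am")    # e.g. Amharic
    refusal_lr = translate(refusal_en, target_lang="am")

    # 3. Distill: fine-tune on (low-resource prompt -> safe refusal),
    #    masking prompt tokens out of the loss so only the refusal is learned.
    full = tok(prompt_lr + " " + refusal_lr, return_tensors="pt")
    labels = full.input_ids.clone()
    prompt_len = tok(prompt_lr, return_tensors="pt").input_ids.shape[1]
    labels[:, :prompt_len] = -100  # -100 is ignored by the CE loss
    loss = model(**full, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Masking the prompt tokens with `-100` is the usual way to restrict the cross-entropy loss to the response, so the model is pushed toward the safe refusal without re-learning the harmful prompt itself.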
Tags: arxiv, papers, safety