Research · 2026-05-08
Safety Anchor: Defending Harmful Fine-tuning via Geometric Bottlenecks
Source: arXiv cs.AI
arXiv:2605.05995v1 Announce Type: cross
Abstract: The safety alignment of Large Language Models (LLMs) remains vulnerable to Harmful Fine-tuning (HFT). While existing defenses impose constraints on parameters, gradients, or internal representations, we observe that they can be effectively...
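The abstract is truncated before the paper's own method is described, so nothing below should be read as the authors' geometric-bottleneck approach. As a hypothetical sketch of the parameter-constraint family of defenses the abstract contrasts against, one common pattern is to augment the fine-tuning loss with a proximal term that anchors the weights to the safety-aligned checkpoint. The names `proximal_penalty` and `lam` are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of a parameter-constraint HFT defense (NOT the paper's
# method): the fine-tuning loss is augmented with an L2 proximal term that
# pulls the weights back toward the safety-aligned checkpoint.
import torch
import torch.nn as nn

def proximal_penalty(model: nn.Module, anchor: list, lam: float = 0.01) -> torch.Tensor:
    # Sum of squared distances between current and aligned parameters,
    # scaled by the regularization strength lam (an assumed hyperparameter).
    return lam * sum(((p - p0) ** 2).sum() for p, p0 in zip(model.parameters(), anchor))

model = nn.Linear(16, 16)  # stand-in for an aligned LLM
anchor = [p.detach().clone() for p in model.parameters()]  # frozen aligned weights

x, y = torch.randn(8, 16), torch.randn(8, 16)
loss = nn.functional.mse_loss(model(x), y) + proximal_penalty(model, anchor)
loss.backward()  # gradients now include the anchoring term
```

Analogous constraints can be placed on gradients (e.g., projecting out directions that degrade safety behavior) or on internal representations, which are the other two defense classes the abstract enumerates.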
Tags: arxiv, papers, safety, fine-tuning