BeClaude
Research · 2026-05-07

When Safety Geometry Collapses: Fine-Tuning Vulnerabilities in Agentic Guard Models

Source: Arxiv CS.AI

arXiv:2605.02914v1 Announce Type: cross

Abstract: A guard model fine-tuned on entirely benign data can lose all safety alignment -- not through adversarial manipulation, but through standard domain specialization. We demonstrate this failure across three purpose-built safety classifiers --...

Tags: arxiv, papers, agents, safety, fine-tuning