BeClaude
Research 2026-05-06

Mitigating Misalignment Contagion by Steering with Implicit Traits

Source: arXiv cs.AI

arXiv:2605.02751v1 (announce type: new)

Abstract: Language models (LMs) are increasingly used in high-stakes, multi-agent settings, where following instructions and maintaining value alignment are critical. Most alignment research focuses on interactions between a single LM and a single user, failing...

Tags: arxivpapers