Research 2026-05-07

RouteHijack: Routing-Aware Attack on Mixture-of-Experts LLMs

Source: arXiv cs.AI

arXiv:2605.02946v1 (Announce Type: cross)

Abstract: Safety alignment is critical for the responsible deployment of large language models (LLMs). As Mixture-of-Experts (MoE) architectures are increasingly adopted to scale model capacity, understanding their safety robustness becomes essential. …

arxivpapers