Research · 2026-05-12
Benchmarking Safety Risks of Knowledge-Intensive Reasoning under Malicious Knowledge Editing
Source: arXiv cs.AI
arXiv:2605.10146v1 Announce Type: new

Abstract: Large language models (LLMs) increasingly rely on knowledge editing to support knowledge-intensive reasoning, but this flexibility also introduces critical safety risks: adversaries can inject malicious or misleading knowledge that corrupts downstream...
Tags: arxiv, papers, reasoning, benchmark, safety