Research · 2026-04-28
Learning to Conceal Risk: Controllable Multi-turn Red Teaming for LLMs in the Financial Domain
Source: Arxiv CS.AI
arXiv:2509.10546v2 (announce type: replace-cross)

Abstract: Large Language Models (LLMs) are increasingly deployed in finance, where unsafe behavior can lead to serious regulatory risks. However, most red-teaming research focuses on overtly harmful content and overlooks attacks that appear legitimate...