Research · 2026-05-07
Reward Hacking Benchmark: Measuring Exploits in LLM Agents with Tool Use
Source: arXiv cs.AI
arXiv:2605.02964v1 (announce type: cross). Abstract: Language model agents trained with reinforcement learning (RL) and given tool access are increasingly deployed in coding assistants, research tools, and autonomous systems. We introduce the Reward Hacking Benchmark (RHB), a suite of multi-step tasks requiring...
Tags: arxiv, papers, agents, benchmark