Research · 2026-04-24

HWE-Bench: Benchmarking LLM Agents on Real-World Hardware Bug Repair Tasks

Source: arXiv cs.AI

arXiv:2604.14709v2 | Announce Type: replace

Abstract: Existing benchmarks for hardware design primarily evaluate Large Language Models (LLMs) on isolated, component-level tasks such as generating HDL modules from specifications, leaving repository-scale evaluation unaddressed. We introduce HWE-Bench,...

Tags: arxiv, papers, agents, benchmark