BeClaude Research
2026-05-12

SmartEval: A Benchmark for Evaluating LLM-Generated Smart Contracts from Natural Language Specifications

Source: arXiv cs.AI

arXiv:2605.09610v1 | Announce Type: cross

Abstract: We introduce SmartEval, a benchmark for systematically evaluating the quality of Solidity smart contracts generated by large language models (LLMs) from natural language specifications. SmartEval provides a corpus of 9,000 generated contracts paired...

Tags: arxiv, papers, benchmark