BigCodeArena: Judging code generations end to end with code executions
A Blog post by BigCode on Hugging Face
Published October 7, 2025 · Terry Yue Zhuo (terryyz) · BigCode

Evaluating the quality of AI-generated code is notoriously difficult. While humans can easily spot whether a piece of code "looks right," determining whether it actually works correctly, handles edge cases, and produces the intended result requires running and testing it. That is why today we're thrilled to announce BigCodeArena, the first human-in-the-loop platform for evaluating code generation models through execution.

Inspired by LMArena for LLMs, we've built a platform that lets anyone compare code generation models side by side, with one crucial difference: you can actually run the code and see what it produces. Simply submit a coding task, watch two different models generate solutions, execute both programs, and vote on which model produced the better result. The votes are aggregated into a leaderboard that displays the community's highest-rated models.

Motivation

The field of code generation has long struggled with reliable evaluation methods. Traditional benchmarks like HumanEval test code against predefined test cases, but these cover only a tiny fraction of real-world programming tasks. Human evaluation platforms exist for general chatbots, but they fall short for code: reading raw source code and mentally simulating its execution is cognitively demanding and error-prone,...
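For readers curious how pairwise votes can turn into a ranked leaderboard, here is a minimal sketch assuming an Elo-style update. The post does not specify BigCodeArena's actual aggregation method, and the model names and votes below are made up for illustration.

```python
# Hypothetical sketch: aggregating head-to-head votes into model ratings
# with an Elo-style update. Not BigCodeArena's actual implementation.

def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    """Return updated (winner, loser) ratings after one pairwise vote."""
    # Expected win probability for the winner given the current gap.
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# All models start at the same baseline rating.
ratings = {"model-a": 1000.0, "model-b": 1000.0}

# Each vote records the winning and losing model for one coding task.
votes = [("model-a", "model-b"), ("model-a", "model-b"), ("model-b", "model-a")]
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

# Sort by rating to produce the leaderboard order.
leaderboard = sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)
print(leaderboard)
```

With two wins out of three, model-a ends up ranked above model-b, and the total rating mass stays constant, a standard property of Elo-style zero-sum updates.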