[2603.03683] CONCUR: Benchmarking LLMs for Concurrent Code Generation
Computer Science > Software Engineering
arXiv:2603.03683 (cs)
[Submitted on 4 Mar 2026]

Title: CONCUR: Benchmarking LLMs for Concurrent Code Generation
Authors: Jue Huang, Tarek Mahmud, Corina Pasareanu, Guowei Yang

Abstract: Leveraging Large Language Models (LLMs) for code generation has become common practice in software engineering, and benchmarks have been established to evaluate the code generation capabilities of LLMs. However, existing benchmarks focus primarily on sequential code and cannot effectively evaluate LLMs on concurrent code generation. Compared to sequential code, concurrent code is more complex and exhibits unique classes of bugs, such as deadlocks and race conditions, that do not occur in sequential code. A benchmark designed for sequential code generation is therefore inadequate for evaluating concurrent code generation with LLMs. To address this gap, we designed CONCUR, a benchmark specifically aimed at evaluating the capability of LLMs to generate concurrent code. CONCUR consists of a base set of 43 concurrency problems derived from a standard concurrency textbook, together with 72 validated mutant variants, for 115 problems in total. The base problems serve as the semantic core of the benchmark, wh...
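The abstract notes that concurrent code exhibits bug classes absent from sequential code, such as race conditions. As a minimal illustration (not drawn from the paper), the sketch below shows the classic shared-counter race in Python: the read-modify-write `counter += 1` is not atomic, so unsynchronized threads can lose updates, while guarding it with a lock makes the result deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Increment the shared counter n times under a lock."""
    global counter
    for _ in range(n):
        # Without the lock, two threads may both read the same value of
        # `counter` and write back the same incremented result, losing
        # one update -- a race condition. The lock serializes the
        # read-modify-write so every increment is counted.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; often less without it
```

A benchmark problem in this space would typically require the model to produce the synchronized version and be checked for both functional correctness and the absence of such races.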