[2603.23292] LLM Olympiad: Why Model Evaluation Needs a Sealed Exam
Computer Science > Artificial Intelligence
arXiv:2603.23292 (cs.AI), submitted on 24 Mar 2026
Authors: Jan Christian Blaise Cruz, Alham Fikri Aji

Abstract: Benchmarks and leaderboards are how NLP most often communicates progress, but in the LLM era they are increasingly easy to misread. Scores can reflect benchmark-chasing, hidden evaluation choices, or accidental exposure to test content, not just broad capability. Closed benchmarks delay some of these issues, but they reduce transparency and make it harder for the community to learn from results. We argue for a complementary practice: an Olympiad-style evaluation event where problems are sealed until evaluation, submissions are frozen in advance, and all entries run through one standardized harness. After scoring, the full task set and evaluation code are released so results can be reproduced and audited. This design aims to make strong performance harder to "manufacture" and easier to trust.

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2603.23292 [cs.AI] (or arXiv:2603.23292v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.23292
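The abstract does not specify an implementation, but the "sealed until evaluation, released afterwards" workflow resembles a commit-and-reveal scheme: organizers could publish a cryptographic hash of the task set before the event, then release the tasks after scoring so anyone can check them against the commitment. The sketch below is a minimal, hypothetical illustration of that idea (the function names and task format are assumptions, not from the paper):

```python
import hashlib
import json


def seal(task_set: list[dict]) -> str:
    """Commit to a task set by publishing only its SHA-256 digest.

    Canonical JSON (sorted keys) ensures the same tasks always
    produce the same digest, regardless of key order.
    """
    canonical = json.dumps(task_set, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


def verify(task_set: list[dict], published_digest: str) -> bool:
    """After the release, anyone can confirm the tasks match the commitment."""
    return seal(task_set) == published_digest


# Before the event: organizers publish only the digest.
tasks = [{"id": 1, "prompt": "Solve X."}, {"id": 2, "prompt": "Prove Y."}]
digest = seal(tasks)

# After scoring: the released tasks are auditable against that digest.
assert verify(tasks, digest)
# Any tampering with the released set breaks verification.
assert not verify([{"id": 1, "prompt": "Solve Z."}], digest)
```

Freezing submissions in advance could use the same mechanism in the other direction: each team publishes a hash of its model or code before the task set is unsealed.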