[2510.16234] ScholarEval: Research Idea Evaluation Grounded in Literature
Computer Science > Artificial Intelligence
arXiv:2510.16234 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: ScholarEval: Research Idea Evaluation Grounded in Literature
Authors: Hanane Nour Moussa, Patrick Queiroz Da Silva, Daniel Adu-Ampratwum, Alyson East, Zitong Lu, Nikki Puccetti, Mingyi Xue, Huan Sun, Bodhisattwa Prasad Majumder, Sachin Kumar

Abstract: As AI tools become increasingly common for research ideation, robust evaluation is critical to ensure the validity and usefulness of generated ideas. We introduce ScholarEval, a retrieval-augmented evaluation framework that assesses research ideas on two fundamental criteria: soundness, the empirical validity of the proposed methods based on existing literature, and contribution, the degree of advancement the idea makes across different dimensions relative to prior research. To evaluate ScholarEval, we introduce ScholarIdeas, the first expert-annotated dataset of multi-domain research ideas and reviews, comprising 117 ideas across four disciplines: artificial intelligence, neuroscience, biochemistry, and ecology. Our evaluation shows that ScholarEval achieves significantly higher coverage of the points mentioned in the human-expert-annotated rubrics in ScholarIdeas compared to all baselines. Furthermore, ScholarEval i...
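The abstract's headline metric is rubric coverage: what fraction of the points in an expert-annotated rubric a generated review actually touches. The sketch below is purely illustrative and not the paper's implementation; ScholarIdeas' matching presumably relies on expert or model judgment, whereas this toy version uses naive substring matching, and the rubric and review strings are hypothetical.

```python
# Toy rubric-coverage metric, in the spirit of the evaluation described in
# the abstract. NOT the paper's method: real point-matching would need
# semantic comparison (human or LLM judgment), not substring search.

def rubric_coverage(rubric_points: list[str], review: str) -> float:
    """Fraction of rubric points whose text appears in the review."""
    if not rubric_points:
        return 0.0
    review_lower = review.lower()
    covered = sum(1 for point in rubric_points
                  if point.lower() in review_lower)
    return covered / len(rubric_points)

# Hypothetical example: the review mentions two of three rubric points.
rubric = ["baseline comparison", "ablation study", "statistical test"]
review = ("The idea lacks a baseline comparison and proposes no "
          "ablation study of its components.")
print(f"{rubric_coverage(rubric, review):.3f}")  # prints 0.667
```

A real scorer would replace the substring check with a semantic match between each rubric point and the review's claims; the normalization by rubric size is what lets coverage be compared across ideas with rubrics of different lengths.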