[2510.22389] Can Small and Reasoning Large Language Models Score Journal Articles for Research Quality and Do Averaging and Few-shot Help?
Summary
This article evaluates whether small and reasoning large language models (LLMs) can assess journal article quality, finding that medium-sized LLMs perform comparably to the cloud-based ChatGPT 4o-mini and Gemini 2.0 Flash and that score averaging improves accuracy.
Why It Matters
The findings clarify what smaller LLMs can contribute to research quality evaluation, suggesting they can be used effectively in a range of settings, including offline environments. This could broaden access to research assessment tools for institutions with limited resources.
Key Takeaways
- Medium-sized LLMs can effectively score journal articles for research quality.
- Averaging scores from multiple identical queries improves evaluation accuracy (see the sketch after this list).
- Smaller LLMs can still be capable, but 1b parameters are often, and 4b sometimes, too few for research assessment.
- Reasoning models showed no clear advantage for this task.
- The study supports the credibility of LLMs in research evaluation.
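The score-averaging strategy is straightforward to reproduce. Below is a minimal sketch in Python, assuming a generic chat-completion client with an `ask()` method, an illustrative 1-4 quality scale, and naive regex parsing of the reply; the model name, prompt wording, and parsing are assumptions for illustration, not the authors' exact setup.

```python
import re
import statistics

def query_score(client, article_text, model="gemma3:27b"):
    """Ask the model once for a 1-4 quality score and parse the first number.

    `client.ask` is an assumed interface; swap in any chat-completion API.
    """
    prompt = (
        "Score the following journal article for research quality on a "
        "1-4 scale, where 4 is world-leading. Reply with the score first.\n\n"
        + article_text
    )
    reply = client.ask(model=model, prompt=prompt)  # assumed client call
    match = re.search(r"[1-4](?:\.\d+)?", reply)    # grab the first 1-4 number
    return float(match.group()) if match else None

def averaged_score(client, article_text, n=5):
    """Send the identical query n times and average the parsed scores."""
    scores = [s for s in (query_score(client, article_text) for _ in range(n))
              if s is not None]
    return statistics.mean(scores) if scores else None
```

Averaging helps because each repeated query samples from the model's score distribution, so the mean is a lower-variance estimate than any single draw.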
Computer Science > Digital Libraries
arXiv:2510.22389 (cs)
[Submitted on 25 Oct 2025 (v1), last revised 17 Feb 2026 (this version, v2)]
Title: Can Small and Reasoning Large Language Models Score Journal Articles for Research Quality and Do Averaging and Few-shot Help?
Authors: Mike Thelwall, Ehsan Mohammadi
Abstract: Previous research has shown that journal article quality ratings from the cloud-based Large Language Model (LLM) families ChatGPT and Gemini and the medium-sized open-weights LLM Gemma3 27b correlate moderately with expert research quality scores. This article assesses whether other medium-sized LLMs, smaller LLMs, and reasoning models have similar abilities. This is tested with Gemma3 variants, Llama4 Scout, Qwen3, Magistral Small and DeepSeek R1 on a dataset of 2,780 medical, health and life science papers in 6 fields, with two different gold standards, one novel. Few-shot and score averaging approaches are also evaluated. The results suggest that medium-sized LLMs have similar performance to ChatGPT 4o-mini and Gemini 2.0 Flash, but that 1b parameters may often, and 4b sometimes, be too few. Reasoning models did not have a clear advantage. Moreover, averaging scores from multiple identical queries seems to be a universally successful strategy, and there is wea...
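As a minimal sketch of the evaluation step, the rank correlation between averaged LLM scores and expert gold-standard scores can be computed with SciPy's `spearmanr`; the paired scores below are illustrative placeholders, not data from the paper.

```python
from scipy.stats import spearmanr

# Illustrative paired scores, one pair per article (placeholder values).
llm_scores = [2.4, 3.0, 1.8, 3.6, 2.2, 2.8]  # averaged LLM quality scores
expert_scores = [2, 3, 2, 4, 2, 3]           # expert gold-standard ratings

rho, p_value = spearmanr(llm_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```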