[2602.18583] Luna-2: Scalable Single-Token Evaluation with Small Language Models
Summary
Luna-2 introduces a scalable architecture for single-token evaluation using small language models, matching the accuracy of frontier LLM judges while sharply reducing cost and latency compared to traditional methods.
Why It Matters
The development of Luna-2 is significant as it addresses the limitations of existing evaluation methods in AI, particularly in terms of cost and speed. By enabling efficient and accurate evaluations, it can enhance the deployment of AI systems while ensuring safety and performance, which is crucial for industries relying on AI technologies.
Key Takeaways
- Luna-2 achieves accuracy comparable to state-of-the-art LLM evaluators.
- It reduces evaluation costs by over 80x and latency by over 20x.
- The architecture allows for concurrent processing of hundreds of metrics on a single GPU.
- Luna-2 is currently protecting over 100 million AI sessions and processing 100 billion tokens monthly.
- The model is designed to be privacy-preserving and efficient for real-world applications.
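The "single-token" idea above can be made concrete. The paper's exact scoring code is not shown here, so the following is an illustrative sketch under an assumption common to single-token evaluators: instead of generating a multi-token verdict, the evaluator reads the backbone's next-token logits for a pair of answer tokens (here assumed to be "yes"/"no") and softmaxes over just that pair, yielding a deterministic score from a single forward pass.

```python
import math

def single_token_score(logits: dict[str, float],
                       pass_token: str = "yes",
                       fail_token: str = "no") -> float:
    """Turn next-token logits into a deterministic metric score in [0, 1].

    No sampling and no multi-token generation are involved, so the same
    input always yields the same score - the operational determinism the
    summary attributes to single-token evaluation.
    """
    z_pass = logits[pass_token]
    z_fail = logits[fail_token]
    m = max(z_pass, z_fail)  # subtract the max to stabilize the exponentials
    e_pass = math.exp(z_pass - m)
    e_fail = math.exp(z_fail - m)
    return e_pass / (e_pass + e_fail)

# Hypothetical logits: the backbone strongly favors "yes" for this input.
score = single_token_score({"yes": 4.2, "no": -1.3, "maybe": 0.5})
```

Because only two logits are read, tokens outside the answer pair (like "maybe" above) never influence the score, which also keeps the evaluation cheap: one forward pass per metric query.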
arXiv:2602.18583 (cs.CL) — Computer Science > Computation and Language
Submitted on 20 Feb 2026
Title: Luna-2: Scalable Single-Token Evaluation with Small Language Models
Authors: Vatsal Goel, Rishon Dsouza, Nikhil Ega, Amey Ramesh Rambatla, Rob Friel, Shuai Shao, Yash Sheth
Abstract: Real-time guardrails require evaluation that is accurate, cheap, and fast - yet today's default, LLM-as-a-judge (LLMAJ), is slow, expensive, and operationally non-deterministic due to multi-token generation. We present Luna-2, a novel architecture that turns decoder-only small language models (SLMs) into a deterministic evaluation model that reliably computes complex task-specific LLMAJ metrics (e.g. toxicity, hallucination, tool selection quality) with accuracy on par with or higher than LLMAJ using frontier LLMs, while drastically reducing the cost and latency of computation. Each metric is implemented as a lightweight LoRA/PEFT head on top of a shared SLM backbone, enabling hundreds of specialized metrics to run concurrently on a single GPU, deployable locally next to AI systems in a privacy-preserving, latency-optimized manner. Across content safety and hallucination benchmarks, Luna-2 matches the accuracy of state-of-the-art LLM-based evaluators while reducing inference cost by over 80x and latency by over 20x. In this paper, we outline the model...
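The abstract's key structural claim is that hundreds of metrics share one frozen backbone, with each metric contributing only a small low-rank adapter. As a minimal sketch of that idea (not Luna-2's actual code; the class names, shapes, and the "toxicity" metric below are illustrative assumptions), each adapter follows the standard LoRA form y = Wx + B(Ax), where W is the shared frozen weight and A, B are tiny per-metric matrices:

```python
from typing import Dict, List

Matrix = List[List[float]]

def matvec(m: Matrix, v: List[float]) -> List[float]:
    """Plain matrix-vector product, row by row."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

class LoRAHead:
    """Per-metric adapter: contributes the rank-r update B @ (A @ x)."""
    def __init__(self, A: Matrix, B: Matrix):
        self.A, self.B = A, B  # A: r x d, B: d x r, with r << d

class SharedBackbone:
    """One frozen weight matrix W shared by every registered metric."""
    def __init__(self, W: Matrix):
        self.W = W
        self.heads: Dict[str, LoRAHead] = {}

    def register(self, metric: str, head: LoRAHead) -> None:
        # Hundreds of adapters fit alongside one W: each stores only
        # 2 * r * d parameters instead of a full d x d copy of the model.
        self.heads[metric] = head

    def forward(self, metric: str, x: List[float]) -> List[float]:
        base = matvec(self.W, x)                     # shared computation
        h = self.heads[metric]
        delta = matvec(h.B, matvec(h.A, x))          # metric-specific update
        return [b + d for b, d in zip(base, delta)]

# Usage: a 2-dimensional identity backbone plus one rank-1 adapter.
backbone = SharedBackbone([[1.0, 0.0], [0.0, 1.0]])
backbone.register("toxicity", LoRAHead(A=[[1.0, 1.0]], B=[[0.5], [0.5]]))
out = backbone.forward("toxicity", [1.0, 2.0])
```

Switching metrics swaps only the tiny A and B matrices, never the backbone, which is why many metrics can run concurrently on a single GPU and be deployed locally next to the AI system being guarded.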