[2603.04429] What Is Missing: Interpretable Ratings for Large Language Model Outputs
Computer Science > Computation and Language
arXiv:2603.04429 (cs)
[Submitted on 17 Feb 2026]

Title: What Is Missing: Interpretable Ratings for Large Language Model Outputs
Authors: Nicholas Stranges, Yimin Yang

Abstract: Current Large Language Model (LLM) preference learning methods, such as Proximal Policy Optimization and Direct Preference Optimization, learn from direct rankings or numerical ratings of model outputs. These rankings are subjective, and a single numerical rating chosen directly by a judge is a poor proxy for the quality of natural language. We introduce the What Is Missing (WIM) rating system to produce rankings from natural-language feedback. WIM integrates into existing training pipelines, can be combined with other rating techniques, and can be used as input to any preference learning method without changing the learning algorithm. To compute a WIM rating, a human or LLM judge writes feedback describing what the model output is missing; we embed the output and the feedback with a sentence embedding model and compute the cosine similarity between the resulting vectors. We empirically observe that, compared to discrete numerical ratings, WIM yields fewer ties and larger rating deltas, which improves the availability of a learning signal in pairwise preference data. We use interpretable in the following limited se...
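The rating computation described in the abstract, embedding the model output and the judge's what-is-missing feedback and taking their cosine similarity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function here is a toy deterministic bag-of-words stand-in for the sentence embedding model the paper actually uses, and `wim_rating` is a hypothetical name for the overall procedure.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a sentence embedding model.

    A real WIM pipeline would call a learned sentence encoder here;
    this hash-based bag-of-words vector only serves to make the
    cosine-similarity step concrete and runnable.
    """
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 0.0 if either is zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0.0 or nb == 0.0:
        return 0.0
    return float(a @ b / (na * nb))

def wim_rating(model_output: str, missing_feedback: str) -> float:
    """WIM rating (hypothetical helper name): embed the model output
    and the judge's what-is-missing feedback, then return the cosine
    similarity between the two embedding vectors."""
    return cosine_similarity(embed(model_output), embed(missing_feedback))

# Example: two outputs scored against the same feedback yield continuous
# ratings, so pairwise comparisons rarely tie, unlike coarse integer scores.
feedback = "the answer never states the time complexity"
r1 = wim_rating("quicksort partitions the array recursively", feedback)
r2 = wim_rating("quicksort runs in O(n log n) average time complexity", feedback)
```

Because the ratings are real-valued similarities rather than small integers, the pair `(r1, r2)` almost always has a nonzero delta, which is the property the abstract highlights for extracting a learning signal from pairwise preference data.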