[2603.04028] A Multi-Dimensional Quality Scoring Framework for Decentralized LLM Inference with Proof of Quality
Computer Science > Machine Learning
arXiv:2603.04028 (cs)
[Submitted on 4 Mar 2026]

Title: A Multi-Dimensional Quality Scoring Framework for Decentralized LLM Inference with Proof of Quality
Authors: Arther Tian, Alex Ding, Frank Chen, Simon Wu, Aaron Chan

Abstract: Decentralized large language model (LLM) inference networks can pool heterogeneous compute to scale serving, but they require lightweight and incentive-compatible mechanisms to assess output quality. Prior work introduced cost-aware Proof of Quality (PoQ) and adaptive robust PoQ to allocate rewards under evaluator heterogeneity and adversarial behavior. In this paper, we focus on the quality signal itself and propose a multi-dimensional quality scoring framework that decomposes output quality into modular dimensions, including model and cost priors, structure quality, semantic quality, query-output alignment, and agreement/uncertainty. Using logged outputs from QA and summarization tasks, we systematically audit dimension reliability and show that seemingly reasonable dimensions can be task-dependent and even negatively correlated with reference quality without calibration. While the default composite underperforms a strong single semantic evaluator, ablations reveal that removing unreliable dimensions and re-normalizing ...
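The abstract describes combining modular quality dimensions into a composite score, and ablating unreliable dimensions with weight re-normalization. A minimal sketch of that idea is below; the dimension names, weights, and scores are illustrative assumptions, not values from the paper.

```python
# Hypothetical composite quality score: each output receives per-dimension
# scores in [0, 1]; the composite is a weighted average. Dropping a dimension
# re-normalizes the remaining weights so they still sum to 1.
# All names and numbers here are illustrative, not taken from the paper.

DEFAULT_WEIGHTS = {
    "model_cost_prior": 0.15,
    "structure": 0.15,
    "semantic": 0.30,
    "alignment": 0.25,
    "agreement_uncertainty": 0.15,
}

def composite_score(dim_scores, weights=DEFAULT_WEIGHTS, drop=()):
    """Weighted average of per-dimension scores, re-normalizing the
    weights after removing any dimensions listed in `drop`."""
    kept = {d: w for d, w in weights.items() if d not in drop}
    total = sum(kept.values())
    return sum(dim_scores[d] * w / total for d, w in kept.items())

# Example per-dimension scores for one output (illustrative only).
scores = {
    "model_cost_prior": 0.6,
    "structure": 0.8,
    "semantic": 0.9,
    "alignment": 0.7,
    "agreement_uncertainty": 0.4,
}

full = composite_score(scores)
ablated = composite_score(scores, drop=("model_cost_prior",
                                        "agreement_uncertainty"))
print(full, ablated)
```

Under this sketch, the ablation in the abstract corresponds to zeroing out unreliable dimensions and rescaling the rest, so the composite stays a convex combination of the surviving scores.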