[2602.03006] Distilling LLM Reasoning into Graph of Concept Predictors
Computer Science > Artificial Intelligence

arXiv:2602.03006 (cs)

[Submitted on 3 Feb 2026 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Distilling LLM Reasoning into Graph of Concept Predictors
Authors: Ziyang Yu, Liang Zhao

Abstract: Deploying Large Language Models (LLMs) for discriminative workloads is often limited by inference latency, compute, and API costs at scale. Active distillation reduces these costs by querying an LLM oracle to train compact discriminative students, but most pipelines distill only final labels, discarding intermediate reasoning signals and offering limited diagnostics of what reasoning is missing and where errors arise. We propose Graph of Concept Predictors (GCP), a reasoning-aware active distillation framework that externalizes the teacher's decision process as a directed acyclic graph and mirrors it with modular concept predictors in the student. GCP enhances sample efficiency through a graph-aware acquisition strategy that targets uncertainty and disagreement at critical reasoning nodes. Additionally, it improves training stability and efficiency by performing targeted sub-module retraining, which attributes downstream loss to specific concept predictors and updates only the most influential modules. Experiments on eight NLP classification benchmarks demonstrate that GCP enhances pe...
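To make the abstract's core ideas concrete, here is a minimal sketch of what a graph of concept predictors with graph-aware acquisition could look like. This is an illustration under assumptions, not the paper's implementation: the class names (`ConceptNode`, `ConceptGraph`), the use of predictive entropy as the uncertainty signal, and the choice of "critical" nodes are all hypothetical stand-ins for the components the abstract describes.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class ConceptNode:
    """One node of the student's DAG: a small predictor for a single concept.

    `predictor` is any callable mapping (input, parent_distributions) to a
    probability distribution over the concept's values.  (Hypothetical
    interface; the paper's concept predictors may differ.)
    """
    def __init__(self, name, predictor, parents=()):
        self.name = name
        self.predictor = predictor
        self.parents = list(parents)

class ConceptGraph:
    """DAG of concept predictors; nodes are assumed topologically ordered."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def forward(self, x):
        """Run every concept predictor, feeding parent outputs to children."""
        probs = {}
        for n in self.nodes.values():
            parent_probs = [probs[p] for p in n.parents]
            probs[n.name] = n.predictor(x, parent_probs)
        return probs

    def acquisition_score(self, x, critical):
        """Graph-aware acquisition: total uncertainty at critical nodes.

        Unlabeled examples with the highest score would be sent to the
        LLM oracle for labeling (a simplified stand-in for the paper's
        uncertainty/disagreement criterion).
        """
        probs = self.forward(x)
        return sum(entropy(probs[c]) for c in critical)

# Toy usage: two intermediate concepts feeding a final label node.
c1 = ConceptNode("sentiment", lambda x, _: [0.6, 0.4])
c2 = ConceptNode("sarcasm", lambda x, _: [0.5, 0.5])   # maximally uncertain
top = ConceptNode(
    "label",
    lambda x, ps: [sum(p[0] for p in ps) / len(ps),
                   sum(p[1] for p in ps) / len(ps)],
    parents=["sentiment", "sarcasm"],
)
graph = ConceptGraph([c1, c2, top])
score = graph.acquisition_score("example text", critical=["sarcasm", "label"])
```

An example whose critical-node predictions are near-uniform (like the `sarcasm` node above) receives a high score and is prioritized for oracle queries; the per-node structure also suggests how downstream loss could be attributed back to individual concept predictors for the targeted sub-module retraining the abstract mentions.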