[2510.16609] Prior Knowledge Makes It Possible: From Sublinear Graph Algorithms to LLM Test-Time Methods
Computer Science > Machine Learning
arXiv:2510.16609 (cs)
[Submitted on 18 Oct 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Prior Knowledge Makes It Possible: From Sublinear Graph Algorithms to LLM Test-Time Methods
Authors: Avrim Blum, Daniel Hsu, Cyrus Rashtchian, Donya Saless

Abstract: Test-time augmentation, such as Retrieval-Augmented Generation (RAG) or tool use, critically depends on an interplay between a model's parametric knowledge and externally retrieved information. However, the theoretical underpinnings of this relationship remain poorly understood. Specifically, it is not clear how much pre-training knowledge is required to answer queries with a small number of augmentation steps, which is a desirable property in practice. To address this question, we formulate multi-step reasoning as an $s$-$t$ connectivity problem on a knowledge graph. We represent a model's pre-training parametric knowledge as a partial, potentially noisy subgraph. We view augmentation as querying an oracle for true edges that augment the model's knowledge. Then, we characterize the necessary and sufficient number of augmentation steps for the model to generate an accurate answer given partial prior knowledge. One key result shows a phase transition: if the prior knowledge graph over $n$ ver...
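The abstract's formulation can be made concrete with a small sketch: treat the model's prior knowledge as a known edge set, treat each oracle query as revealing one true edge leaving the current $s$-component, and count queries until $s$ and $t$ connect. This is an illustrative toy, not the paper's actual algorithm or bounds; the function names and the greedy query strategy are assumptions, and for simplicity the known edges are assumed to be a (noise-free) subset of the true edges.

```python
from collections import deque

def reach(nodes, edges, s):
    """Return the set of vertices reachable from s via undirected `edges`."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def augmentation_steps(nodes, true_edges, known_edges, s, t):
    """Count greedy oracle queries needed to connect s and t.

    Illustrative sketch: `known_edges` models prior (parametric) knowledge,
    assumed here to be a subset of `true_edges`. Each "augmentation step"
    reveals one true edge crossing out of s's current component. Returns
    None if s and t are disconnected even in the true graph.
    """
    known = set(known_edges)
    steps = 0
    while t not in reach(nodes, known, s):
        comp = reach(nodes, known, s)
        # Oracle query: find an unknown true edge with exactly one
        # endpoint inside the current s-component.
        frontier = [e for e in true_edges
                    if ((e[0] in comp) != (e[1] in comp)) and e not in known]
        if not frontier:
            return None  # no true edge can grow the component toward t
        known.add(frontier[0])
        steps += 1
    return steps

# On a path 0-1-2-3 with only edge (0,1) known, connecting 0 to 3
# takes two augmentation steps.
print(augmentation_steps([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], [(0, 1)], 0, 3))  # → 2
```

The greedy choice of which frontier edge to query is arbitrary here; the paper's question of how *few* steps suffice, as a function of how much of the graph is already known, is exactly what this counter makes measurable in the toy setting.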