[2510.16609] Prior Knowledge Makes It Possible: From Sublinear Graph Algorithms to LLM Test-Time Methods

arXiv - Machine Learning 4 min read

Computer Science > Machine Learning
arXiv:2510.16609 (cs)
[Submitted on 18 Oct 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Prior Knowledge Makes It Possible: From Sublinear Graph Algorithms to LLM Test-Time Methods
Authors: Avrim Blum, Daniel Hsu, Cyrus Rashtchian, Donya Saless

Abstract: Test-time augmentation, such as Retrieval-Augmented Generation (RAG) or tool use, critically depends on the interplay between a model's parametric knowledge and externally retrieved information. However, the theoretical underpinnings of this relationship remain poorly understood. In particular, it is not clear how much pre-training knowledge is required to answer queries with a small number of augmentation steps, which is a desirable property in practice. To address this question, we formulate multi-step reasoning as an $s$-$t$ connectivity problem on a knowledge graph. We represent a model's pre-training parametric knowledge as a partial, potentially noisy subgraph. We view augmentation as querying an oracle for true edges that augment the model's knowledge. Then, we characterize the necessary and sufficient number of augmentation steps for the model to generate an accurate answer given partial prior knowledge. One key result shows a phase transition: if the prior knowledge graph over $n$ ver...
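The formulation in the abstract can be illustrated with a minimal sketch (our own construction, not from the paper): the true knowledge graph is a set of edges, the model's parametric knowledge is a subgraph of it, and each test-time augmentation step is one oracle query for a true edge the model does not already know. A BFS from $s$ to $t$ that charges one query per unknown edge then counts the augmentation steps a search would spend. Note this simplification omits the noisy case, where the prior subgraph may also contain false edges.

```python
from collections import defaultdict, deque

def st_connect_with_oracle(true_edges, prior_edges, s, t):
    """BFS for an s-t path over the true knowledge graph.

    Prior (parametric) edges are traversed for free; every true edge
    absent from the prior costs one oracle query (one augmentation step).
    Returns (reachable, number_of_oracle_queries).
    """
    true_adj = defaultdict(set)
    for u, v in true_edges:
        true_adj[u].add(v)
        true_adj[v].add(u)
    prior = {frozenset(e) for e in prior_edges}

    visited = {s}
    queue = deque([s])
    queries = 0
    while queue:
        u = queue.popleft()
        if u == t:
            return True, queries
        for v in true_adj[u]:
            if v in visited:
                continue
            if frozenset((u, v)) not in prior:
                queries += 1  # edge unknown to the model: one augmentation step
            visited.add(v)
            queue.append(v)
    return False, queries

# Path 0-1-2-3 where the model already knows (0,1) and (2,3):
# only the missing middle edge (1,2) must be retrieved at test time.
print(st_connect_with_oracle([(0, 1), (1, 2), (2, 3)], [(0, 1), (2, 3)], 0, 3))
```

Under this toy model, richer prior knowledge directly reduces the query count, which is the trade-off the paper's phase-transition result makes precise.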

Originally published on April 03, 2026. Curated by AI News.
