[2603.04716] SLO-Aware Compute Resource Allocation for Prefill-Decode Disaggregated LLM Inference



Computer Science > Distributed, Parallel, and Cluster Computing
arXiv:2603.04716 (cs) · Submitted on 5 Mar 2026

Title: SLO-Aware Compute Resource Allocation for Prefill-Decode Disaggregated LLM Inference

Authors: Luchang Li, Dongfang Li, Bozhao Gong, Yu Zhang

Abstract: Prefill-Decode (P/D) disaggregation has emerged as a widely adopted optimization strategy for Large Language Model (LLM) inference. However, there currently exists no well-established methodology for determining the optimal number of P/D hardware resources, subject to constraints on total throughput, service level objectives (SLOs), and request characteristics, specifically input and output lengths. To address this gap, we propose a hybrid approach that combines theoretical modeling with empirical benchmarking. First, we present a theoretical model for calculating P/D resource counts, based on total throughput requirements, request input and output lengths, and prefill and decode throughput. Then, to obtain the actual prefill and decode throughput under SLO constraints, we model the prefill process using M/M/1 queuing theory, deriving the achieved prefill throughput from the benchmarked maximum prefill throughput and Time-To-First-Token (TTFT). For the decode phase, we determine the decode batch sizes that meet Time-Per-Output-Token (TPO...
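The abstract's two ideas can be illustrated with a minimal sketch. The function names, the ceiling-based rounding, and the specific numbers are assumptions for illustration, not the paper's actual formulas: pool sizes are set so each phase's token throughput covers demand, and an M/M/1 model (mean sojourn time 1/(mu - lambda) at service rate mu, arrival rate lambda) bounds the prefill arrival rate that still meets the TTFT target.

```python
import math

def pd_resource_counts(req_rate, len_in, len_out, prefill_tps, decode_tps):
    """Hypothetical sizing sketch: each pool must cover its phase's
    token-throughput demand (tokens/s), rounded up to whole instances."""
    prefill_demand = req_rate * len_in   # prefill tokens/s required
    decode_demand = req_rate * len_out   # decode tokens/s required
    n_prefill = math.ceil(prefill_demand / prefill_tps)
    n_decode = math.ceil(decode_demand / decode_tps)
    return n_prefill, n_decode

def slo_prefill_tps(max_tps, len_in, ttft_slo):
    """M/M/1 sketch: service rate mu = max_tps / len_in requests/s.
    Mean sojourn time 1/(mu - lam) <= ttft_slo caps the arrival rate,
    giving the achievable prefill throughput under the TTFT SLO."""
    mu = max_tps / len_in
    lam = max(mu - 1.0 / ttft_slo, 0.0)  # largest admissible arrival rate
    return lam * len_in                  # achieved prefill tokens/s

# Example: 10 req/s, 1000-token inputs, 200-token outputs,
# 20k prefill tokens/s and 1k decode tokens/s per instance.
print(pd_resource_counts(10, 1000, 200, 20000, 1000))  # (1, 2)
print(slo_prefill_tps(20000, 1000, 0.5))               # 18000.0
```

Note that the abstract only names the inputs of the model (throughput targets, request lengths, benchmarked throughputs, TTFT), so this sketch fills in the simplest consistent arithmetic.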

Originally published on March 06, 2026. Curated by AI News.

