[2509.22134] Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding

arXiv - AI 4 min read

About this article


Computer Science > Computation and Language

arXiv:2509.22134 (cs) [Submitted on 26 Sep 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding

Authors: Shijing Hu, Jingyang Li, Zhihui Lu, Pan Zhou

Abstract: Speculative decoding accelerates large language model (LLM) inference by letting a lightweight draft model propose multiple tokens that the target model verifies in parallel. Yet existing training objectives optimize only a single greedy draft path, while decoding follows a tree policy that re-ranks and verifies multiple branches. This draft policy misalignment limits achievable speedups. We introduce Group Tree Optimization (GTO), which aligns training with the decoding-time tree policy through two components: (i) Draft Tree Reward, a sampling-free objective equal to the expected acceptance length of the draft tree under the target model, directly measuring decoding performance; (ii) Group-based Draft Policy Training, a stable optimization scheme that contrasts trees from the current and a frozen reference draft model, forming debiased group-standardized advantages and applying a PPO-style surrogate along the longest accepted sequence for robust updates. We further prove that increasing our Draft Tree Rew...
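The ingredients named in the abstract — an expected-acceptance-length reward over a draft tree, group-standardized advantages, and a PPO-style clipped surrogate — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: `TreeNode`, the function names, and the clipping constant `eps` are all assumptions for exposition.

```python
# Illustrative sketch of the two GTO ingredients described in the abstract.
# All names here (TreeNode, expected_acceptance_length, ppo_surrogate) are
# assumptions for exposition, not the paper's actual code.
import math
from dataclasses import dataclass, field
from typing import List


@dataclass
class TreeNode:
    accept_prob: float  # target-model acceptance probability of this draft token
    children: List["TreeNode"] = field(default_factory=list)


def expected_acceptance_length(roots: List[TreeNode]) -> float:
    """Expected number of accepted draft tokens in a tree: each node
    contributes the probability that its entire root-to-node prefix
    is accepted by the target model."""
    total = 0.0
    stack = [(root, 1.0) for root in roots]
    while stack:
        node, prefix_p = stack.pop()
        p = prefix_p * node.accept_prob  # P(whole prefix up to this node accepted)
        total += p
        stack.extend((child, p) for child in node.children)
    return total


def group_standardized_advantages(rewards: List[float]) -> List[float]:
    """Debias rewards within a group of trees (e.g. trees from the current
    and a frozen reference draft model) by standardizing over the group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]


def ppo_surrogate(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """Clipped PPO-style objective for one step along an accepted sequence."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

For example, a root token accepted with probability 0.9 whose single child is accepted with probability 0.5 yields an expected acceptance length of 0.9 + 0.9 * 0.5 = 1.35.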

Originally published on March 03, 2026. Curated by AI News.

Related Articles

Llms

TRACER: Learn-to-Defer for LLM Classification with Formal Teacher-Agreement Guarantees

I'm releasing TRACER (Trace-Based Adaptive Cost-Efficient Routing), a library for learning cost-efficient routing policies from LLM trace...

Reddit - Machine Learning · 1 min ·
Llms

Mistral AI raises $830M in debt to set up a data center near Paris | TechCrunch

Mistral aims to start operating the data center by the second quarter of 2026.

TechCrunch - AI · 4 min ·
Llms

The Rationing: AI companies are using the "subsidize, addict, extract" playbook — and developers are the product

Anthropic just ran the classic platform playbook on developers: offer generous limits to build dependency, then tighten the screws once t...

Reddit - Artificial Intelligence · 1 min ·
Llms

CLI for Google AI Search (gai.google) — run AI-powered code/tech searches headlessly from your terminal

Google AI (gai.google) gives Gemini-powered answers for technical queries — think AI-enhanced search with code understanding. I built a C...

Reddit - Artificial Intelligence · 1 min ·
