[2602.10273] Power-SMC: Low-Latency Sequence-Level Power Sampling for Training-Free LLM Reasoning


arXiv - Machine Learning · 4 min read

About this article

Abstract page for arXiv paper 2602.10273: Power-SMC: Low-Latency Sequence-Level Power Sampling for Training-Free LLM Reasoning

Statistics > Machine Learning
arXiv:2602.10273 (stat)
[Submitted on 10 Feb 2026 (v1), last revised 23 Mar 2026 (this version, v2)]

Title: Power-SMC: Low-Latency Sequence-Level Power Sampling for Training-Free LLM Reasoning

Authors: Seyedarmin Azizi, Erfan Baghaei Potraghloo, Minoo Ahmadi, Souvik Kundu, Massoud Pedram

Abstract: Many recent reasoning gains in large language models can be explained as distribution sharpening: biasing generation toward high-likelihood trajectories already supported by the pretrained model, rather than modifying its weights. A natural formalization is the sequence-level power distribution $\pi_\alpha(y\mid x)\propto p_\theta(y\mid x)^\alpha$ ($\alpha>1$), which concentrates mass on whole sequences instead of adjusting token-level temperature. Prior work shows that Metropolis--Hastings (MH) sampling from this distribution recovers strong reasoning performance, but at order-of-magnitude inference slowdowns. We introduce Power-SMC, a training-free Sequential Monte Carlo scheme that targets the same objective while remaining close to standard decoding latency. Power-SMC advances a small particle set in parallel, corrects importance weights token-by-token, and resamples when necessary, all within a single GPU-friendly batched decode. We prove that temperature $\tau=1/\alpha$...
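As an illustration of the mechanics the abstract describes, here is a minimal, hypothetical sketch of a token-level SMC sampler for a power-scaled target. It assumes the sequence-level target factorizes as $\pi_\alpha(y\mid x)\propto\prod_t p_\theta(y_t\mid y_{<t},x)^\alpha$ and that the base model itself serves as the proposal, in which case the incremental importance weight at step $t$ is $p_\theta(y_t\mid y_{<t},x)^{\alpha-1}$. A toy categorical model stands in for the LLM; `power_smc`, `toy_next_token_probs`, and every parameter below are placeholders for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EOS, MAX_LEN = 8, 0, 12


def toy_next_token_probs(prefixes):
    """Stand-in for p_theta(. | y_<t, x): one categorical per particle prefix."""
    probs = np.empty((len(prefixes), VOCAB))
    for i, prefix in enumerate(prefixes):
        logits = rng.normal(size=VOCAB)
        logits[EOS] += 0.5 * len(prefix)  # EOS grows more likely as the prefix lengthens
        z = np.exp(logits - logits.max())
        probs[i] = z / z.sum()
    return probs


def power_smc(alpha=2.0, n_particles=8, ess_threshold=0.5):
    """Toy SMC sketch targeting pi_alpha(y|x) proportional to p_theta(y|x)^alpha."""
    prefixes = [[] for _ in range(n_particles)]
    log_w = np.zeros(n_particles)  # running log importance weights
    for _ in range(MAX_LEN):
        done = [len(p) > 0 and p[-1] == EOS for p in prefixes]
        if all(done):
            break
        probs = toy_next_token_probs(prefixes)  # one batched "decode" step
        for i in range(n_particles):
            if done[i]:
                continue  # finished particles are carried along unchanged
            t = rng.choice(VOCAB, p=probs[i])  # propose the next token from the base model
            # Proposal = base model, target proportional to p^alpha, so the
            # incremental weight for this token is p(y_t | y_<t, x)^(alpha - 1).
            log_w[i] += (alpha - 1.0) * np.log(probs[i, t])
            prefixes[i] = prefixes[i] + [int(t)]
        # Resample only when the effective sample size collapses.
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        if 1.0 / np.sum(w**2) < ess_threshold * n_particles:
            idx = rng.choice(n_particles, size=n_particles, p=w)
            prefixes = [list(prefixes[i]) for i in idx]
            log_w = np.zeros(n_particles)  # weights reset to uniform after resampling
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return prefixes[int(np.argmax(w))]  # read out the highest-weight particle


if __name__ == "__main__":
    print(power_smc())
```

In this sketch, resampling is triggered by an effective-sample-size check rather than at every step, which is the standard SMC way to keep particle diversity while still concentrating mass on high-likelihood trajectories, and all particles advance in one batched call per token, mirroring the single GPU-friendly batched decode the abstract mentions.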

Originally published on March 24, 2026. Curated by AI News.
