[2603.00532] DenoiseFlow: Uncertainty-Aware Denoising for Reliable LLM Agentic Workflows
Computer Science > Artificial Intelligence
arXiv:2603.00532 (cs)
[Submitted on 28 Feb 2026]

Title: DenoiseFlow: Uncertainty-Aware Denoising for Reliable LLM Agentic Workflows
Authors: Yandong Yan, Junwei Peng, Shijie Li, Chenxi Li, Yifei Shang, Can Deng, Ruiting Dai, Yongqiang Zhao, Jiaqi Zhu, Yu Huang

Abstract: Autonomous agents are increasingly entrusted with complex, long-horizon tasks, ranging from mathematical reasoning to software generation. While agentic workflows facilitate these tasks by decomposing them into multi-step reasoning chains, reliability degrades significantly as the sequence lengthens. Specifically, minor interpretation errors in natural-language instructions tend to compound silently across steps. We term this failure mode accumulated semantic ambiguity. Existing approaches to mitigate it often lack runtime adaptivity, relying instead on static exploration budgets, reactive error recovery, or single-path execution that ignores uncertainty entirely. We formalize the multi-step reasoning process as a Noisy MDP and propose DenoiseFlow, a closed-loop framework that performs progressive denoising through three coordinated stages: (1) Sensing estimates per-step semantic uncertainty; (2) Regulating adaptively allocates computation by routing between fast single-path execution and parallel e...
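The abstract's Sensing and Regulating stages suggest a simple pattern: probe a step with a few samples, estimate semantic uncertainty from their disagreement, and widen the search only when ambiguity is high. The sketch below illustrates this uncertainty-gated routing idea in isolation; it is not the paper's implementation, and all names (`step_uncertainty`, `route_step`, the threshold and sample counts) are illustrative assumptions, with a plain callable standing in for an LLM step.

```python
import math
import random
from collections import Counter

def step_uncertainty(candidates):
    """Normalized entropy of candidate step outputs as a
    crude proxy for per-step semantic uncertainty (0 = unanimous,
    1 = maximally split). Not the paper's estimator."""
    counts = Counter(candidates)
    if len(counts) <= 1:
        return 0.0
    total = len(candidates)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))

def route_step(sample_fn, threshold=0.5, probe_k=3, wide_k=9):
    """Sense with a small probe, then regulate compute:
    low uncertainty -> commit to the fast single-path answer;
    high uncertainty -> spend a wider parallel budget and vote."""
    probe = [sample_fn() for _ in range(probe_k)]
    u = step_uncertainty(probe)
    if u < threshold:
        # Fast path: the probe agrees, so keep the modal answer.
        answer = Counter(probe).most_common(1)[0][0]
        return answer, u, "single"
    # Slow path: disagreement detected; draw extra parallel
    # candidates and resolve by majority vote.
    wide = probe + [sample_fn() for _ in range(wide_k - probe_k)]
    answer = Counter(wide).most_common(1)[0][0]
    return answer, u, "parallel"

if __name__ == "__main__":
    # Unambiguous step: every sample agrees, so the fast path fires.
    print(route_step(lambda: "A"))
    # Ambiguous step: samples disagree, so the parallel path fires.
    rng = random.Random(0)
    print(route_step(lambda: rng.choice(["A", "B"])))
```

Under this framing, the threshold is the knob trading reliability for cost: a lower threshold routes more steps through the expensive parallel branch, which is the kind of static budget the paper argues should instead adapt at runtime.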