Accelerating Qwen3-8B Agent on Intel® Core™ Ultra with Depth-Pruned Draft Models
Published September 29, 2025

By Igor Margulis, Ofir Zafrir, Shira Guskin, Guy Boudoukh (Intel), and Pedro Cuenca (Hugging Face)

TL;DR: Qwen3-8B is one of the most exciting recent releases: a model with native agentic capabilities, making it a natural fit for the AI PC. With OpenVINO.GenAI, we have been able to accelerate generation by ~1.3× using speculative decoding with a lightweight Qwen3-0.6B draft. By applying a simple depth-pruning process to the draft, we pushed the speedup even further, to ~1.4×. We wrap up by showing how these improvements can be used to run a fast, local AI agent with 🤗 smolagents.

Qwen3

Qwen3-8B is part of the latest Qwen family, trained with explicit agentic behaviors. It supports tool invocation, multi-step reasoning, and long-context handling, capabilities that make it well suited to complex agent workflows. When integrated with frameworks like Hugging Face 🤗 smolagents, Qwen-Agent, or AutoGen, it enables a wide range of agentic applications built around tool use and reasoning. Unlike single-turn chatbots, agentic applications rely on reasoning models that produce "thinking aloud" traces, intermediate steps that expand token usage and make inference speed critical to responsiveness. The combination of optimized inference and buil...
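To make the speedup mechanism concrete, here is a minimal, toy sketch of greedy speculative decoding: a cheap draft model proposes a few tokens, and the target model verifies them, keeping the longest matching prefix. This is illustrative only; in practice OpenVINO.GenAI performs the draft/verify loop internally (and verifies all proposed tokens in a single batched target pass). The `target_next` and `draft_next` callables are hypothetical stand-ins for the 8B target and 0.6B draft models.

```python
def speculative_decode(target_next, draft_next, prompt, max_new_tokens, k=4):
    """Toy greedy speculative decoding.

    target_next / draft_next: callables mapping a token list to the next
    token (stand-ins for the large target model and the small draft).
    k: number of tokens the draft proposes per round.
    """
    tokens = list(prompt)
    limit = len(prompt) + max_new_tokens
    while len(tokens) < limit:
        # 1. Draft proposes up to k tokens autoregressively (cheap).
        ctx = list(tokens)
        proposal = []
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target verifies left to right: accept the matching prefix,
        #    and on the first mismatch emit the target's own token instead.
        for t in proposal:
            if len(tokens) >= limit:
                break
            expected = target_next(tokens)
            if t == expected:
                tokens.append(t)          # draft token accepted "for free"
            else:
                tokens.append(expected)   # correction from the target
                break
    return tokens[len(prompt):]
```

The key property of speculative decoding is visible even in this sketch: the output is token-for-token identical to what greedy decoding with the target alone would produce, regardless of draft quality. A better-aligned draft only raises the acceptance rate, which is exactly why pruning the draft (rather than swapping it for a different model) preserves output quality while improving speed.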