[2510.09424] The Speech-LLM Takes It All: A Truly Fully End-to-End Spoken Dialogue State Tracking Approach

arXiv - Machine Learning · 3 min read · Research

Summary

This paper presents a comparative study of context management strategies for end-to-end Spoken Dialogue State Tracking using Speech-LLMs, highlighting the effectiveness of full spoken history input.
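For readers new to the task: dialogue state tracking (DST) maps the conversation so far to a structured state of slot-value pairs that capture the user's goal. A toy Python illustration (the slot names are our own, in the style of SpokenWOZ/MultiWOZ annotations, not taken from the paper):

    # Toy dialogue state after the user says:
    # "I need a cheap hotel in the centre for two nights."
    state = {
        "hotel-pricerange": "cheap",
        "hotel-area": "centre",
        "hotel-stay": "2",
    }
    # A tracker updates this mapping after every turn; a Speech-LLM
    # predicts it directly from audio rather than from an ASR transcript.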

Why It Matters

Understanding effective context management in spoken dialogue systems is crucial for enhancing user interaction and improving AI communication capabilities. This research provides insights into optimizing dialogue state tracking, which can lead to more natural and efficient conversational agents.

Key Takeaways

  • Feeding the full spoken conversation as input yields the highest performance among models of similar size, significantly surpassing prior methods.
  • Attention-pooling-based compression of the spoken history offers a strong trade-off, keeping accuracy competitive at a reduced context size (see the sketch after this list).
  • The study systematically evaluates multimodal (text history plus spoken current turn), full-spoken-history, and compressed-spoken-history strategies.
  • Detailed analysis attributes the gains to more effective context utilization.
  • All experiments are conducted on the SpokenWOZ corpus.
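The paper does not include code, but attention-pooling compression can be illustrated with a short PyTorch sketch. Everything below (class name, dimensions, number of learned queries) is our own assumption for illustration, not the authors' implementation: a small set of learned query vectors cross-attends over a turn's speech-encoder frames, producing a fixed number of summary vectors.

    import torch
    import torch.nn as nn

    class AttentionPooler(nn.Module):
        # Compresses a variable-length sequence of speech-encoder frames
        # into a fixed number of summary vectors via learned-query
        # cross-attention. Illustrative sketch only; the paper's exact
        # pooling design may differ.
        def __init__(self, dim=1024, num_queries=16, num_heads=8):
            super().__init__()
            # One learned query per output summary vector.
            self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, frames, pad_mask=None):
            # frames: (batch, seq_len, dim) encoder outputs for one turn
            # pad_mask: (batch, seq_len), True where a frame is padding
            q = self.queries.unsqueeze(0).expand(frames.size(0), -1, -1)
            pooled, _ = self.attn(q, frames, frames, key_padding_mask=pad_mask)
            return pooled  # (batch, num_queries, dim)

    # A 600-frame spoken turn is reduced to 16 vectors before being
    # placed alongside later turns in the Speech-LLM's input context.
    pooler = AttentionPooler()
    turn = torch.randn(2, 600, 1024)
    print(pooler(turn).shape)  # torch.Size([2, 16, 1024])

Pooling past turns while leaving the current turn uncompressed is one plausible way such a module slots into the pipeline; the paper's reported trade-off concerns exactly this kind of fixed-budget summary of the spoken history.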

Computer Science > Computation and Language

arXiv:2510.09424 (cs)
[Submitted on 10 Oct 2025 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: The Speech-LLM Takes It All: A Truly Fully End-to-End Spoken Dialogue State Tracking Approach
Authors: Nizar El Ghazal, Antoine Caubrière, Valentin Vielzeuf

Abstract: This paper presents a comparative study of context management strategies for end-to-end Spoken Dialog State Tracking using Speech-LLMs. We systematically evaluate traditional multimodal context (combining text history and spoken current turn), full spoken history, and compressed spoken history approaches. Our experiments on the SpokenWOZ corpus demonstrate that providing the full spoken conversation as input yields the highest performance among models of similar size, significantly surpassing prior methods. Furthermore, we show that attention-pooling-based compression of the spoken history offers a strong trade-off, maintaining competitive accuracy with reduced context size. Detailed analysis confirms that improvements stem from more effective context utilization.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
Cite as: arXiv:2510.09424 [cs.CL]
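To make the three compared strategies concrete, here is a hedged Python sketch of how each one might assemble the Speech-LLM's input. The function and field names are ours, and compress() is a trivial stand-in for the attention pooler sketched above; the paper's actual input construction may differ.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        audio_emb: list   # speech-encoder frames for this turn (placeholder)
        transcript: str   # ASR or reference transcript of this turn

    def compress(frames, keep=16):
        # Stand-in for attention pooling; here we simply truncate
        # to a fixed budget for demonstration purposes.
        return frames[:keep]

    def build_context(history, current, strategy):
        # Assemble the model input under each strategy (names are ours).
        if strategy == "multimodal":
            # Traditional: text history plus the spoken current turn only.
            text = " ".join(t.transcript for t in history)
            return {"text": text, "audio": [current.audio_emb]}
        if strategy == "full_spoken":
            # Best-performing in the paper: every turn stays as audio.
            return {"audio": [t.audio_emb for t in history] + [current.audio_emb]}
        if strategy == "compressed_spoken":
            # Past turns compressed; the current turn is kept whole.
            audio = [compress(t.audio_emb) for t in history] + [current.audio_emb]
            return {"audio": audio}
        raise ValueError(f"unknown strategy: {strategy}")

    # Example with dummy one-float-per-frame embeddings:
    history = [Turn([0.1] * 600, "i need a hotel"), Turn([0.2] * 500, "which area?")]
    current = Turn([0.3] * 550, "in the city centre")
    print(len(build_context(history, current, "full_spoken")["audio"]))           # 3
    print(len(build_context(history, current, "compressed_spoken")["audio"][0]))  # 16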

Related Articles

I Asked ChatGPT 500 Questions. Here Are the Ads I Saw Most Often | WIRED
Ads are rolling out across the US on ChatGPT’s free tier. I asked OpenAI's bot 500 questions to see what these ads were like and how they...
Wired - AI · 9 min

Abacus.Ai Claw LLM consumes an incredible amount of credit without any usage :(
Three days ago, I clicked the "Deploy OpenClaw In Seconds" button to get an overview of the new service, but I didn't build any automatio...
Reddit - Artificial Intelligence · 1 min

Google’s Gemini AI app debuts in Hong Kong
Tech giant’s chatbot service tops Apple’s app store chart in the city.
AI Tools & Products · 2 min

Google Launches Gemini Import Tools to Poach Users From Rival AI Apps
Anyone looking to switch their AI assistant will find it surprisingly easy, as it only takes a few steps to move from A to B. This is not...
AI Tools & Products · 4 min
