[2512.17053] Knowledge Distillation with Structured Chain-of-Thought for Text-to-SQL
Summary
This article presents Struct-SQL, a novel Knowledge Distillation framework that improves Small Language Models on Text-to-SQL tasks by distilling structured rather than free-form reasoning, yielding significant performance gains over traditional distillation methods.
Why It Matters
As enterprises increasingly rely on Text-to-SQL systems, the balance between cost, performance, and security becomes critical. This research addresses the limitations of existing models by proposing a structured approach to knowledge distillation, potentially leading to more reliable and efficient SQL generation.
Key Takeaways
- Structured reasoning improves the performance of Small Language Models in Text-to-SQL tasks.
- The proposed Struct-SQL framework achieves an absolute 8.1% improvement over an unstructured Chain-of-Thought distillation baseline.
- Reduction in syntactic errors is a key benefit of using structured knowledge distillation.
Computer Science > Computation and Language
arXiv:2512.17053 (cs)
[Submitted on 18 Dec 2025 (v1), last revised 25 Feb 2026 (this version, v2)]
Title: Knowledge Distillation with Structured Chain-of-Thought for Text-to-SQL
Authors: Khushboo Thaker, Yony Bresler
Abstract: Deploying accurate Text-to-SQL systems at the enterprise level faces a difficult trilemma involving cost, security and performance. Current solutions force enterprises to choose between expensive, proprietary Large Language Models (LLMs) and low-performing Small Language Models (SLMs). Efforts to improve SLMs often rely on distilling reasoning from large LLMs using unstructured Chain-of-Thought (CoT) traces, a process that remains inherently ambiguous. Instead, we hypothesize that a formal, structured reasoning representation provides a clearer, more reliable teaching signal, as the Text-to-SQL task requires explicit and precise logical steps. To evaluate this hypothesis, we propose Struct-SQL, a novel Knowledge Distillation (KD) framework that trains an SLM to emulate a powerful large LLM. Consequently, we adopt a query execution plan as a formal blueprint to derive this structured reasoning. Our SLM, distilled with structured CoT, achieves an absolute improvement of 8.1% over an unstructured CoT distillation baseline. A detailed error anal...
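The abstract's core idea, replacing free-form CoT traces with an execution-plan-style structured reasoning trace as the distillation target, can be sketched as follows. This is a minimal illustration of the concept, not the paper's actual implementation: the function name, step format, and example schema are all hypothetical assumptions.

```python
# Illustrative sketch (hypothetical, not from the Struct-SQL paper):
# serializing a structured, execution-plan-style reasoning trace as the
# distillation target an SLM is fine-tuned to reproduce, instead of an
# ambiguous free-form CoT paragraph.

def build_distillation_target(plan_steps, sql):
    """Serialize ordered (operator, detail) pairs, loosely mirroring a
    query execution plan (scan -> filter -> project ...), followed by
    the final SQL the student model must emit."""
    lines = [
        f"Step {i}: {op.upper()} -> {detail}"
        for i, (op, detail) in enumerate(plan_steps, start=1)
    ]
    lines.append(f"SQL: {sql}")
    return "\n".join(lines)

# Hypothetical training example for the question
# "Which employees earn more than 50000?"
target = build_distillation_target(
    [
        ("scan", "table employees"),
        ("filter", "salary > 50000"),
        ("project", "columns name, salary"),
    ],
    "SELECT name, salary FROM employees WHERE salary > 50000;",
)
print(target)
```

Under this framing, the teacher LLM's output is parsed into discrete plan steps before distillation, so syntactic errors in the student's reasoning are easier to detect and penalize than in unstructured prose, which is consistent with the error reduction the takeaways describe.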