[2602.22351] Decoder-based Sense Knowledge Distillation

arXiv - AI · 3 min read

Summary

This paper introduces Decoder-based Sense Knowledge Distillation (DSKD), a novel framework that enhances knowledge distillation in decoder-based large language models (LLMs) by integrating lexical resources without requiring dictionary lookups during inference.

Why It Matters

Integrating structured lexical knowledge into generative models such as LLMs is key to improving their semantic understanding and performance. Prior sense-distillation methods target encoder models; by extending the approach to decoders, this paper could make knowledge distillation more effective across natural language processing applications.

Key Takeaways

  • DSKD framework improves knowledge distillation for decoder models.
  • Incorporates lexical resources without inference-time dictionary lookups.
  • Demonstrates significant performance enhancements across various benchmarks.
  • Addresses limitations of previous methods focused on encoder models.
  • Supports the development of more semantically aware generative models.

Computer Science > Computation and Language

arXiv:2602.22351 (cs) [Submitted on 25 Feb 2026]

Title: Decoder-based Sense Knowledge Distillation
Authors: Qitong Wang, Mohammed J. Zaki, Georgios Kollias, Vasileios Kalantzis

Abstract: Large language models (LLMs) learn contextual embeddings that capture rich semantic information, yet they often overlook structured lexical knowledge such as word senses and relationships. Prior work has shown that incorporating sense dictionaries can improve knowledge distillation for encoder models, but applying them to decoders as generative models remains challenging. In this paper, we introduce Decoder-based Sense Knowledge Distillation (DSKD), a framework that integrates lexical resources into the training of decoder-style LLMs without requiring dictionary lookup at inference time. Extensive experiments on diverse benchmarks demonstrate that DSKD significantly enhances knowledge distillation performance for decoders, enabling generative models to inherit structured semantics while maintaining efficient training.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.22351 [cs.CL] (or arXiv:2602.22351v1 [cs.CL] for this version), https://doi.org/10.48550/arXiv.2602.22351
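
The abstract states the recipe only at a high level: sense information from a lexical resource is injected during training, so generation needs no dictionary access afterwards. As a rough illustration of that idea (a minimal sketch, not the paper's actual method: the loss form, the alpha weight, and the per-token sense_table are all assumptions made here), a decoder's usual next-token loss can be combined with an auxiliary term that pulls hidden states toward pre-computed sense embeddings:

    import torch
    import torch.nn.functional as F

    # Toy dimensions; a real decoder LLM would be far larger.
    vocab_size, hidden_dim, seq_len, batch = 100, 64, 8, 2

    # Sense embeddings computed offline from a lexical resource such as
    # WordNet (an assumption; the paper's summary does not name one).
    # One vector per vocabulary token; tokens without a sense entry
    # could be masked out of the auxiliary loss.
    sense_table = torch.randn(vocab_size, hidden_dim)

    def dskd_style_loss(logits, hidden_states, target_ids, alpha=0.5):
        """Combine the usual LM loss with a sense-alignment term.

        logits:        (batch, seq_len, vocab_size) decoder outputs
        hidden_states: (batch, seq_len, hidden_dim) last-layer states
        target_ids:    (batch, seq_len) next-token labels
        """
        # Standard next-token cross-entropy (the normal decoder objective).
        lm_loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                                  target_ids.reshape(-1))
        # Auxiliary distillation term: pull each hidden state toward the
        # sense embedding of its target token. The dictionary is consulted
        # only here, at training time; inference is untouched.
        sense_targets = sense_table[target_ids]      # (batch, seq_len, dim)
        kd_loss = 1.0 - F.cosine_similarity(hidden_states, sense_targets,
                                            dim=-1).mean()
        return lm_loss + alpha * kd_loss

    # Stand-in forward pass for a real decoder.
    hidden = torch.randn(batch, seq_len, hidden_dim)
    logits = hidden @ sense_table.t()                # tied head, for brevity
    targets = torch.randint(0, vocab_size, (batch, seq_len))
    print(dskd_style_loss(logits, hidden, targets))

Because sense_table appears only inside the training loss, it can be discarded once training ends; inference is ordinary generation, which is exactly the no-lookup property the summary highlights.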

Related Articles

Llms

I Asked ChatGPT 500 Questions. Here Are the Ads I Saw Most Often | WIRED

Ads are rolling out across the US on ChatGPT’s free tier. I asked OpenAI's bot 500 questions to see what these ads were like and how they...

Wired - AI · 9 min ·
Llms

Abacus.Ai Claw LLM consumes an incredible amount of credit without any usage :(

Three days ago, I clicked the "Deploy OpenClaw In Seconds" button to get an overview of the new service, but I didn't build any automatio...

Reddit - Artificial Intelligence · 1 min ·
Llms

Google’s Gemini AI app debuts in Hong Kong

Tech giant’s chatbot service tops Apple’s app store chart in the city.

AI Tools & Products · 2 min ·
Llms

Google Launches Gemini Import Tools to Poach Users From Rival AI Apps

Anyone looking to switch their AI assistant will find it surprisingly easy, as it only takes a few steps to move from A to B. This is not...

AI Tools & Products · 4 min ·