[2512.09069] KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification

arXiv - Machine Learning · 4 min read

Summary

The paper presents KD-OCT, a novel knowledge distillation framework that enhances the efficiency of deep learning models for classifying retinal OCT images, achieving high diagnostic performance with reduced computational demands.

Why It Matters

As age-related macular degeneration and related conditions are major causes of vision loss, efficient diagnostic tools are crucial. KD-OCT enables real-time deployment of advanced models in clinical settings, potentially improving patient outcomes and accessibility to eye care.

Key Takeaways

  • KD-OCT compresses a high-performance teacher model into a lightweight student model.
  • The method balances soft knowledge transfer and hard supervision for effective learning.
  • KD-OCT achieves near-teacher performance while significantly reducing model size and inference time.
  • The framework facilitates edge deployment for efficient AMD screening.
  • Experimental results show KD-OCT outperforms existing OCT classifiers in efficiency-accuracy balance.
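The combined objective described above, soft teacher knowledge transfer balanced against hard ground-truth supervision, is the standard knowledge-distillation loss. The sketch below shows one common formulation; the temperature `T`, mixing weight `alpha`, and the temperature-squared scaling are illustrative conventions from the distillation literature, not values reported in this paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combined KD loss: alpha * soft (teacher) term + (1 - alpha) * hard CE term.

    T and alpha are hypothetical hyperparameters for illustration.
    """
    # Soft term: KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    soft = (T ** 2) * kl.mean()
    # Hard term: cross-entropy of the student against ground-truth labels.
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * ce
```

In a training loop, `teacher_logits` would come from the frozen ConvNeXtV2-Large teacher and `student_logits` from the EfficientNet-B2 student on the same batch of OCT images; the paper's "real-time distillation" means both forward passes happen per batch rather than from precomputed teacher outputs.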

Computer Science > Computer Vision and Pattern Recognition

arXiv:2512.09069 (cs) [Submitted on 9 Dec 2025 (v1), last revised 25 Feb 2026 (this version, v2)]

Title: KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification
Authors: Erfan Nourbakhsh, Nasrin Sanjari, Ali Nourbakhsh

Abstract: Age-related macular degeneration (AMD) and choroidal neovascularization (CNV)-related conditions are leading causes of vision loss worldwide, with optical coherence tomography (OCT) serving as a cornerstone for early detection and management. However, deploying state-of-the-art deep learning models like ConvNeXtV2-Large in clinical settings is hindered by their computational demands. It is therefore desirable to develop efficient models that maintain high diagnostic performance while enabling real-time deployment. In this study, a novel knowledge distillation framework, termed KD-OCT, is proposed to compress a high-performance ConvNeXtV2-Large teacher model, enhanced with advanced augmentations, stochastic weight averaging, and focal loss, into a lightweight EfficientNet-B2 student for classifying normal, drusen, and CNV cases. KD-OCT employs real-time distillation with a combined loss balancing soft teacher knowledge transfer and hard ground-truth supervision. The effectiveness...
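The abstract notes that the teacher is trained with focal loss, which down-weights easy, well-classified examples so training focuses on hard cases. A minimal sketch of the focal-loss term follows; the focusing parameter `gamma` is a common default from the literature, not a value given in this excerpt:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Focal loss: FL = -(1 - p_t)^gamma * log(p_t), averaged over the batch.

    probs: (N, C) predicted class probabilities; labels: (N,) integer classes.
    With gamma=0 this reduces to ordinary cross-entropy. gamma=2.0 is a
    conventional choice, assumed here for illustration.
    """
    p_t = probs[np.arange(len(labels)), labels]  # probability of the true class
    return float(-np.mean((1.0 - p_t) ** gamma * np.log(p_t + 1e-12)))
```

For a confidently correct prediction (p_t near 1), the `(1 - p_t)^gamma` factor shrinks the loss toward zero, so gradient signal concentrates on ambiguous scans such as borderline drusen versus CNV cases.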

Related Articles

LLMs

[P] I built an autonomous ML agent that runs experiments on tabular data indefinitely - inspired by Karpathy's AutoResearch

Inspired by Andrej Karpathy's AutoResearch, I built a system where Claude Code acts as an autonomous ML researcher on tabular binary clas...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] Data curation and targeted replacement as a pre-training alignment and controllability method

Hi, r/MachineLearning: has much research been done in large-scale training scenarios where undesirable data has been replaced before trai...

Reddit - Machine Learning · 1 min ·
LLMs

[R] BraiNN: An Experimental Neural Architecture with Working Memory, Relational Reasoning, and Adaptive Learning

BraiNN An Experimental Neural Architecture with Working Memory, Relational Reasoning, and Adaptive Learning BraiNN is a compact research‑...

Reddit - Machine Learning · 1 min ·
Machine Learning

[HIRING] Remote AI Training Jobs - Up to $1K/Week | Collaborators Wanted. USA

Submitted by /u/nortonakenga

Reddit - ML Jobs · 1 min ·