[2602.22935] A Holistic Framework for Robust Bangla ASR and Speaker Diarization with Optimized VAD and CTC Alignment

arXiv - AI

Summary

This paper presents a robust framework for Bangla Automatic Speech Recognition (ASR) and Speaker Diarization, addressing challenges in processing long-form audio through optimized Voice Activity Detection (VAD) and Connectionist Temporal Classification (CTC) alignment.

Why It Matters

Bangla is a widely spoken language yet remains underrepresented in NLP applications. This research enhances ASR and speaker diarization for Bangla, providing scalable solutions for real-world applications, particularly in multi-speaker environments, thus contributing to the advancement of low-resource language technologies.

Key Takeaways

  • Introduces a holistic framework for Bangla ASR and speaker diarization.
  • Optimizes VAD and CTC alignment for improved transcription accuracy.
  • Addresses challenges in processing long-form audio exceeding 30–60 seconds.
  • Utilizes data augmentation and noise removal techniques for better performance.
  • Provides scalable solutions for real-world applications in complex environments.
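The VAD step listed above chops a long recording into speech segments before recognition. The paper's optimized VAD is not described here in detail, so the following is only a minimal energy-threshold sketch; the frame size, threshold, and gap-bridging parameter are illustrative assumptions, not the authors' settings.

```python
def energy_vad(samples, sample_rate, frame_ms=30, threshold=0.01, min_gap_frames=10):
    """Toy energy-based VAD: return (start, end) sample indices of speech regions.

    samples: mono audio as a list of floats in [-1, 1].
    A frame is "speech" if its mean energy exceeds `threshold`; short
    non-speech gaps (<= min_gap_frames) inside a segment are bridged.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    speech = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        speech.append(energy > threshold)

    segments = []
    start, gap = None, 0
    for idx, is_speech in enumerate(speech):
        if is_speech:
            if start is None:
                start = idx
            gap = 0
        elif start is not None:
            gap += 1
            if gap > min_gap_frames:
                # Close the segment at the last speech frame.
                segments.append((start * frame_len, (idx - gap + 1) * frame_len))
                start, gap = None, 0
    if start is not None:
        segments.append((start * frame_len, len(speech) * frame_len))
    return segments
```

Production systems typically use a learned VAD model rather than a fixed energy threshold, but the segment-merging logic (bridging brief pauses so a sentence is not split mid-word) is the part that matters most for downstream ASR on long-form audio.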

Computer Science > Sound — arXiv:2602.22935 (cs) [Submitted on 26 Feb 2026]

Title: A Holistic Framework for Robust Bangla ASR and Speaker Diarization with Optimized VAD and CTC Alignment

Authors: Zarif Ishmam, Zarif Mahir, Shafnan Wasif, Md. Ishtiak Moin

Abstract: Despite being one of the most widely spoken languages globally, Bangla remains a low-resource language in the field of Natural Language Processing (NLP). Mainstream Automatic Speech Recognition (ASR) and Speaker Diarization systems for Bangla struggle when processing long-form audio exceeding 30–60 seconds. This paper presents a robust framework engineered specifically for extended Bangla content by leveraging pre-existing models enhanced with novel optimization pipelines for the DL Sprint 4.0 contest. Our approach uses Voice Activity Detection (VAD) optimization and Connectionist Temporal Classification (CTC) segmentation via forced word alignment to maintain temporal accuracy and transcription integrity over long durations. Additionally, we applied several fine-tuning techniques and preprocessed the data using augmentation and noise removal. By bridging the performance gap in complex, multi-speaker environments, this work provides a scalable solution for real-world, long-form Bangla speech applications.
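The CTC segmentation via forced alignment mentioned in the abstract can be illustrated with a minimal Viterbi aligner over the standard blank-interleaved target sequence. This is a toy sketch of the general technique, not the authors' pipeline; the vocabulary size, label indices, and probabilities in the example are invented for illustration.

```python
import math

BLANK = 0  # index reserved for the CTC blank symbol

def ctc_forced_align(log_probs, targets):
    """Viterbi forced alignment of `targets` against per-frame log-probs.

    log_probs: T frames, each a list of V log-probabilities.
    targets:   label sequence without blanks (values in 1..V-1).
    Returns the most likely per-frame label sequence (BLANK or a target
    label), from which word/character timestamps can be read off.
    """
    T = len(log_probs)
    # Standard CTC state sequence: [blank, y1, blank, y2, ..., yN, blank].
    ext = [BLANK]
    for y in targets:
        ext += [y, BLANK]
    S = len(ext)

    NEG = -math.inf
    alpha = [[NEG] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    alpha[0][0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[0][1] = log_probs[0][ext[1]]

    for t in range(1, T):
        for s in range(S):
            # Allowed transitions: stay, advance by 1, or skip a blank
            # (only between distinct non-blank labels).
            best, arg = alpha[t - 1][s], s
            if s >= 1 and alpha[t - 1][s - 1] > best:
                best, arg = alpha[t - 1][s - 1], s - 1
            if (s >= 2 and ext[s] != BLANK and ext[s] != ext[s - 2]
                    and alpha[t - 1][s - 2] > best):
                best, arg = alpha[t - 1][s - 2], s - 2
            alpha[t][s] = best + log_probs[t][ext[s]]
            back[t][s] = arg

    # The path may end on the final blank or the final label.
    s = S - 1
    if S >= 2 and alpha[T - 1][S - 2] > alpha[T - 1][S - 1]:
        s = S - 2
    states = [0] * T
    for t in range(T - 1, -1, -1):
        states[t] = s
        s = back[t][s]
    return [ext[st] for st in states]
```

Given frame-level timestamps, the spans of consecutive identical non-blank labels in the returned sequence mark where each word sits in the audio, which is what makes long-form transcripts splittable and re-joinable without losing temporal accuracy. Library implementations such as `torchaudio.functional.forced_align` follow the same dynamic program.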

