[2603.28028] Efficient Domain Adaptation for Text Line Recognition via Decoupled Language Models

arXiv - Machine Learning


Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.28028 (cs), submitted on 30 Mar 2026

Title: Efficient Domain Adaptation for Text Line Recognition via Decoupled Language Models

Authors: Arundhathi Dev, Justin Zhan

Abstract: Optical character recognition remains critical infrastructure for document digitization, yet state-of-the-art performance is often restricted to well-resourced institutions by prohibitive computational barriers. End-to-end transformer architectures achieve strong accuracy but demand hundreds of GPU hours for domain adaptation, limiting accessibility for practitioners and digital humanities scholars. We present a modular detection-and-correction framework that achieves near-SOTA accuracy with single-GPU training. Our approach decouples lightweight visual character detection (domain-agnostic) from domain-specific linguistic correction using pretrained sequence models including T5, ByT5, and BART. By training the correctors entirely on synthetic noise, we enable annotation-free domain adaptation without requiring labeled target images. Evaluating across modern clean handwriting, cursive script, and historical documents, we identify a critical "Pareto frontier" in architecture selection: T5-Base excels on modern text with standard vocabulary, whereas ByT5-Bas...
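The annotation-free training idea in the abstract (correctors trained "entirely on synthetic noise") can be illustrated with a small sketch. This is an assumption-laden illustration, not the paper's method: the abstract does not specify the noise distribution, so the character-level substitution/deletion/insertion model, rates, and `corrupt` function below are all hypothetical. The point is only that (noisy, clean) training pairs for a seq2seq corrector such as T5 or ByT5 can be manufactured from unlabeled target-domain text alone.

```python
import random

# Hypothetical character-level noise model mimicking OCR detector errors.
# Rates and alphabet are illustrative assumptions, not from the paper.
def corrupt(text, p_sub=0.05, p_del=0.03, p_ins=0.03,
            alphabet="abcdefghijklmnopqrstuvwxyz ", seed=None):
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if r < p_del:
            continue                          # simulate a missed character
        if r < p_del + p_sub:
            out.append(rng.choice(alphabet))  # simulate a misread character
        else:
            out.append(ch)
        if rng.random() < p_ins:
            out.append(rng.choice(alphabet))  # simulate a spurious detection
    return "".join(out)

# A training pair for a corrector: input = noisy, target = clean.
# Only unlabeled domain text is needed; no labeled target images.
clean = "the quick brown fox jumps over the lazy dog"
noisy = corrupt(clean, seed=0)
```

Because the corruption is applied to plain text from the target domain, the corrector learns the domain's linguistic regularities without any image annotation, which is what makes the adaptation step cheap relative to end-to-end fine-tuning.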

Originally published on March 31, 2026. Curated by AI News.

Related Articles

- [2604.01473] SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits (arXiv - AI)
- [2603.23682] Assessment Design in the AI Era: A Method for Identifying Items Functioning Differentially for Humans and Chatbots (arXiv - AI)
- [2601.07422] Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations (arXiv - AI)
- [2603.08486] Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images (arXiv - AI)
