[2603.03508] Raising Bars, Not Parameters: LilMoo Compact Language Model for Hindi
Computer Science > Computation and Language
arXiv:2603.03508 (cs)
[Submitted on 3 Mar 2026]

Title: Raising Bars, Not Parameters: LilMoo Compact Language Model for Hindi
Authors: Shiza Fatimah, Aniket Sen, Sophia Falk, Florian Mai, Lucie Flek, Nicholas Kluge Corrêa

Abstract: The dominance of large multilingual foundation models has widened linguistic inequalities in Natural Language Processing (NLP), often leaving low-resource languages underrepresented. This paper introduces LilMoo, a 0.6-billion-parameter Hindi language model trained entirely from scratch to address this gap. Unlike prior Hindi models that rely on continual pretraining from opaque multilingual foundations, LilMoo is developed through a fully transparent and reproducible pipeline optimized for limited compute environments. We construct a high-quality Hindi corpus (GigaLekh) filtered through both heuristic and learned (LLM-as-a-judge) methods, complemented by bilingual augmentation with curated English data. Using this dataset, we explore various training recipes for small-scale language models. Across comprehensive evaluation suites, LilMoo consistently outperforms comparably sized multilingual baselines such as Qwen2.5-0.5B and Qwen3-0.6B, demonstrating that well-designed la...
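The two-stage corpus filtering the abstract mentions, a cheap heuristic pass followed by an LLM-as-a-judge pass, can be sketched roughly as below. Everything here is an illustrative assumption, not the paper's actual GigaLekh pipeline: the function names, thresholds, and the stand-in judge are hypothetical, and in practice the judge would prompt an LLM to score document quality.

```python
# A minimal sketch of two-stage filtering: heuristic rules, then an
# LLM-as-a-judge score. All names and thresholds are illustrative
# assumptions, not the paper's actual pipeline.
from typing import Callable, Iterable, Iterator

DEVANAGARI = range(0x0900, 0x0980)  # Unicode block covering Hindi script

def devanagari_ratio(text: str) -> float:
    """Fraction of non-space characters that are Devanagari."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(ord(c) in DEVANAGARI for c in chars) / len(chars)

def heuristic_pass(text: str, min_chars: int = 200, min_ratio: float = 0.5) -> bool:
    """Cheap rule-based filter: keep documents long enough and mostly Hindi."""
    return len(text) >= min_chars and devanagari_ratio(text) >= min_ratio

def filter_corpus(
    docs: Iterable[str],
    judge: Callable[[str], float],   # in practice: an LLM scoring quality in [0, 1]
    judge_threshold: float = 0.7,
) -> Iterator[str]:
    """Yield only documents that survive both the heuristic and judge passes."""
    for doc in docs:
        # Run the cheap heuristic first so the expensive judge sees fewer docs.
        if heuristic_pass(doc) and judge(doc) >= judge_threshold:
            yield doc

if __name__ == "__main__":
    toy_judge = lambda text: 1.0  # hypothetical stand-in for an LLM call
    corpus = ["नमस्ते दुनिया " * 30, "hello world"]
    kept = list(filter_corpus(corpus, toy_judge))
    print(f"kept {len(kept)} of {len(corpus)} documents")  # kept 1 of 2
```

Ordering the heuristic pass before the judge pass is the usual design choice here, since rule-based checks cost microseconds while each LLM judgment costs a model call.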