[2603.20854] SozKZ: Training Efficient Small Language Models for Kazakh from Scratch
Computer Science > Computation and Language

arXiv:2603.20854 (cs)

[Submitted on 21 Mar 2026]

Title: SozKZ: Training Efficient Small Language Models for Kazakh from Scratch

Authors: Saken Tukenov

Abstract: Kazakh, a Turkic language spoken by over 22 million people, remains underserved by existing multilingual language models, which allocate minimal capacity to low-resource languages and employ tokenizers ill-suited to agglutinative morphology. We present SozKZ, a family of Llama-architecture language models (50M-600M parameters) trained entirely from scratch on 9 billion tokens of Kazakh text with a dedicated 50K BPE tokenizer. We evaluate all models on three Kazakh benchmarks -- multiple-choice cultural QA, reading comprehension (Belebele), and topic classification (SIB-200) -- alongside five multilingual baselines ranging from 500M to 3B parameters. Our 600M model achieves 30.3% accuracy on Kazakh cultural QA, approaching the 32.0% of Llama-3.2-1B (2x larger), and 25.5% on SIB-200 topic classification, surpassing all evaluated multilingual models up to 2B parameters. We observe consistent scaling from 50M to 600M, with MC QA accuracy rising from 22.8% to 30.3%, suggesting that further scaling remains beneficial. These results demonstrate that small, dedicated models trained from scratch with a language-appropriate tokenizer off...
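The abstract's central design choice is a dedicated 50K BPE tokenizer trained on Kazakh text rather than a multilingual vocabulary. A minimal sketch of how such a tokenizer could be trained with the Hugging Face tokenizers library is shown below; the library choice, corpus file name, and special tokens are illustrative assumptions, since the paper's abstract only states that a 50K BPE tokenizer was used.

```python
# Minimal sketch: training a 50K BPE tokenizer on Kazakh text with the
# Hugging Face `tokenizers` library. The library, file names, and special
# tokens are assumptions for illustration, not the authors' actual pipeline.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-level BPE covers Cyrillic Kazakh script without an explicit alphabet.
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True)

trainer = trainers.BpeTrainer(
    vocab_size=50_000,  # matches the 50K vocabulary stated in the abstract
    special_tokens=["<s>", "</s>", "<unk>", "<pad>"],  # hypothetical choices
)

# `kazakh_corpus.txt` is a placeholder for the 9B-token training corpus.
tokenizer.train(files=["kazakh_corpus.txt"], trainer=trainer)
tokenizer.save("sozkz_tokenizer.json")
```

The motivation for a language-specific vocabulary follows from the abstract's claim about agglutinative morphology: multilingual tokenizers tend to over-fragment inflected Kazakh word forms into many short subwords, while a BPE vocabulary learned only from Kazakh can keep frequent stems and suffixes as single tokens, spending the model's limited capacity more efficiently.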