[2512.01865] Cross-Lingual Interleaving for Speech Language Models
Summary
The paper presents a novel cross-lingual interleaving method for Speech Language Models (SLMs), enhancing multilingual understanding and conversation capabilities without textual supervision.
Why It Matters
This research addresses the scarcity of NLP resources for non-English languages, broadening access to speech-based language technology. By releasing new datasets and benchmarks, it enables further progress on multilingual SLMs, which are crucial for global communication.
Key Takeaways
- Introduces a cross-lingual interleaving method to improve SLMs.
- Enhances semantic accuracy and cross-lingual capabilities of models.
- Releases new datasets and benchmarks for broader accessibility.
- Supports reproducibility with open-source resources.
- Addresses the scarcity of spoken evaluation benchmarks in NLP.
Computer Science > Computation and Language, arXiv:2512.01865 (cs)
Submitted on 1 Dec 2025 (v1), last revised 20 Feb 2026 (this version, v2)
Authors: Adel Moumen, Guangzhi Sun, Philip C. Woodland
Abstract
Spoken Language Models (SLMs) aim to learn linguistic competence directly from speech using discrete units, widening access to Natural Language Processing (NLP) technologies for languages with limited written resources. However, progress has been largely English-centric due to scarce spoken evaluation benchmarks and training data, making cross-lingual learning difficult. We present a cross-lingual interleaving method that mixes speech tokens across languages without textual supervision. We also release an EN-FR training dataset, TinyStories (~42k hours), together with EN-FR spoken StoryCloze and TopicCloze benchmarks for cross-lingual semantic evaluation, both synthetically generated using GPT-4. On 360M and 1B SLMs under matched training-token budgets, interleaving improves monolingual semantic accuracy, enables robust cross-lingual continuation, and strengthens cross-lingual hidden-state alignment. Taken together, these results indicate that cross-lingual interleaving is a simple, scalable route to building multilingual SLMs that understand and converse across languages.
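The core idea of mixing speech tokens across languages can be illustrated with a minimal sketch. Everything here is hypothetical: the function name, the fixed span length, and the simple alternating scheme are illustrative assumptions, not the paper's exact recipe, which may use different span boundaries or sampling.

```python
import random

def interleave_cross_lingual(en_units, fr_units, span_len=10, seed=0):
    """Alternate fixed-size spans of discrete speech units from an EN and an
    FR utterance into one mixed training sequence (hypothetical sketch)."""
    rng = random.Random(seed)
    mixed = []
    i = j = 0
    take_en = rng.random() < 0.5  # randomly pick which language starts
    while i < len(en_units) or j < len(fr_units):
        if take_en and i < len(en_units):
            mixed.extend(en_units[i:i + span_len])
            i += span_len
        elif j < len(fr_units):
            mixed.extend(fr_units[j:j + span_len])
            j += span_len
        else:  # FR exhausted; drain the remaining EN spans
            mixed.extend(en_units[i:i + span_len])
            i += span_len
        take_en = not take_en
    return mixed

# Example with toy unit IDs standing in for discrete speech tokens:
en = [f"en{i}" for i in range(25)]
fr = [f"fr{i}" for i in range(15)]
mixed = interleave_cross_lingual(en, fr, span_len=10)
```

Training an SLM on such mixed sequences exposes it to both languages within a single context window, which is one plausible way the method could encourage the cross-lingual hidden-state alignment reported in the paper.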