[2603.02156] How Small Can 6G Reason? Scaling Tiny Language Models for AI-Native Networks
Computer Science > Networking and Internet Architecture
arXiv:2603.02156 (cs)
[Submitted on 2 Mar 2026]

Title: How Small Can 6G Reason? Scaling Tiny Language Models for AI-Native Networks
Authors: Mohamed Amine Ferrag, Abderrahmane Lakas, Merouane Debbah

Abstract: Emerging 6G visions, reflected in ongoing standardization efforts within 3GPP, IETF, ETSI, ITU-T, and the O-RAN Alliance, increasingly characterize networks as AI-native systems in which high-level semantic reasoning layers operate above standardized control- and data-plane functions. Although frontier-scale large language models (LLMs) such as Qwen2.5-7B and Olmo-3-7B demonstrate strong reasoning capability, their computational footprint limits deployment in latency-sensitive, edge-native infrastructures. This paper presents a systematic empirical study of the scaling behavior and deployment efficiency of compact language models for network-level semantic reasoning in AI-native 6G systems. Using 6G-Bench, a standardization-aligned benchmark comprising 30 decision-making tasks across five capability domains, we evaluate models ranging from 135M (SmolLM2-135M) to 7B parameters (Qwen2.5-7B), including mid-scale architectures such as Llama-3.2-1B, Granite-1B, and Qwen2.5-3B. Deterministic accuracy (pass@1) increases from 0.224 at 135M to 0.7...