[2603.00474] Wireless Power Control Based on Large Language Models
Computer Science > Information Theory

arXiv:2603.00474 (cs)

[Submitted on 28 Feb 2026]

Title: Wireless Power Control Based on Large Language Models

Authors: Jiacheng Wang, Yucheng Sheng, Le Liang, Hao Ye, Shi Jin

Abstract: This paper investigates the power control problem in wireless networks by repurposing pre-trained large language models (LLMs) as relational reasoning backbones. In hyper-connected interference environments, traditional optimization methods face high computational cost, while standard message passing neural networks suffer from aggregation bottlenecks that can obscure critical high-interference structures. In response, we propose PC-LLM, a physics-informed framework that augments a pre-trained Transformer with an interference-aware attention bias. The proposed bias-tuning mechanism injects the physical channel gain matrix directly into the self-attention logits, enabling explicit fusion of the wireless topology with pre-trained relational priors without retraining the backbone from scratch. Extensive experiments demonstrate that PC-LLM consistently outperforms both traditional optimization methods and state-of-the-art graph neural network baselines, while exhibiting exceptional zero-shot generalization to unseen environments. We further observe a structural-semantic decoupling phenomenon: topology-relevant relat...
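To make the bias-tuning idea concrete, the sketch below shows one plausible way an interference-aware bias could be added to self-attention logits, as the abstract describes. This is a minimal illustration only, not the paper's implementation: the module name InterferenceBiasedAttention, the log-scaling of the channel gains, the per-head learnable scale, and all dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterferenceBiasedAttention(nn.Module):
    """Self-attention whose logits are shifted by a physical channel-gain bias.

    Hypothetical sketch: each of the K transmitter-receiver links is one token;
    entry (i, j) of the channel gain matrix biases how strongly link i attends
    to link j.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learnable weight of the physical bias, one scalar per head
        # (assumed; the paper's exact parameterization may differ).
        self.bias_scale = nn.Parameter(torch.zeros(n_heads))

    def forward(self, x: torch.Tensor, gain: torch.Tensor) -> torch.Tensor:
        # x:    (batch, K, d_model)  embeddings of the K interfering links
        # gain: (batch, K, K)        channel gain matrix in linear scale
        B, K, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, K, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, K, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, K, self.n_heads, self.d_head).transpose(1, 2)

        # Standard scaled dot-product attention logits.
        logits = (q @ k.transpose(-2, -1)) / self.d_head ** 0.5

        # Interference-aware bias: log-scaled gains added to every head's
        # logits so strong interferers receive more attention mass.
        bias = torch.log(gain + 1e-12)                                  # (B, K, K)
        logits = logits + self.bias_scale.view(1, -1, 1, 1) * bias.unsqueeze(1)

        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, K, -1)
        return self.out(out)


if __name__ == "__main__":
    layer = InterferenceBiasedAttention(d_model=64, n_heads=4)
    x = torch.randn(2, 8, 64)           # 8 links, embedding dim 64
    gain = torch.rand(2, 8, 8) + 1e-3   # random positive channel gains
    print(layer(x, gain).shape)         # torch.Size([2, 8, 64])
```

Because only the added bias parameters (and any adapter layers one chooses) are trained, the pre-trained Transformer backbone can stay frozen, which matches the abstract's claim of fusing the wireless topology with pre-trained relational priors without retraining the backbone from scratch.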