[2603.24634] Dual-Graph Multi-Agent Reinforcement Learning for Handover Optimization
Computer Science > Networking and Internet Architecture
arXiv:2603.24634 (cs) [Submitted on 25 Mar 2026]

Title: Dual-Graph Multi-Agent Reinforcement Learning for Handover Optimization
Authors: Matteo Salvatori, Filippo Vannella, Sebastian Macaluso, Stylianos E. Trevlakis, Carlos Segura Perales, José Suarez-Varela, Alexandros-Apostolos A. Boulogeorgos, Ioannis Arapakis

Abstract: HandOver (HO) control in cellular networks is governed by a set of HO control parameters that are traditionally configured through rule-based heuristics. A key parameter for HO optimization is the Cell Individual Offset (CIO), defined for each pair of neighboring cells and used to bias HO triggering decisions. At network scale, tuning CIOs becomes a tightly coupled problem: small changes can redirect mobility flows across multiple neighbors, and static rules often degrade under non-stationary traffic and mobility. We exploit the pairwise structure of CIOs by formulating HO optimization as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) on the network's dual graph. In this representation, each agent controls a neighbor-pair CIO and observes Key Performance Indicators (KPIs) aggregated over its local dual-graph neighborhood, enabling scalable decentralized decisions while preserving graph locality. Building on...
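To make the dual-graph representation concrete, here is a minimal sketch (not from the paper's code; the cell topology and helper names are illustrative assumptions): each node of the dual graph is a neighbor-cell pair controlling one CIO, and two dual nodes are adjacent when their pairs share a cell, which defines the local neighborhood over which an agent aggregates KPIs.

```python
# Sketch of the dual (line) graph of a cell-adjacency graph.
# Assumption: a toy 4-cell topology; in the paper each dual node is an
# agent controlling the CIO of one neighbor-cell pair.
from itertools import combinations

# Hypothetical cell adjacency: cell -> set of neighboring cells
adjacency = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}

# Dual-graph nodes: one unordered neighbor pair per CIO
pairs = {frozenset((u, v)) for u, nbrs in adjacency.items() for v in nbrs}

# Dual-graph edges: two pairs are neighbors iff they share a cell
dual_edges = {
    (p, q) for p, q in combinations(sorted(pairs, key=sorted), 2) if p & q
}

def dual_neighborhood(pair):
    """Pairs whose KPIs this pair's agent would aggregate (shared cell)."""
    return {q for q in pairs if q != pair and q & pair}
```

For the toy topology above, the agent for pair (A, B) observes the pairs (A, C) and (B, C) but not (C, D), illustrating how graph locality bounds each agent's observation set as the network grows.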