[2603.24738] Decentralized Task Scheduling in Distributed Systems: A Deep Reinforcement Learning Approach
Computer Science > Distributed, Parallel, and Cluster Computing

arXiv:2603.24738 (cs)

[Submitted on 25 Mar 2026]

Title: Decentralized Task Scheduling in Distributed Systems: A Deep Reinforcement Learning Approach

Authors: Daniel Benniah John

Abstract: Efficient task scheduling in large-scale distributed systems presents significant challenges due to dynamic workloads, heterogeneous resources, and competing quality-of-service requirements. Traditional centralized approaches face scalability limitations and single points of failure, while classical heuristics lack adaptability to changing conditions. This paper proposes a decentralized multi-agent deep reinforcement learning (DRL-MADRL) framework for task scheduling in heterogeneous distributed systems. We formulate the problem as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) and develop a lightweight actor-critic architecture implemented using only NumPy, enabling deployment on resource-constrained edge devices without heavyweight machine learning frameworks. Using workload characteristics derived from the publicly available Google Cluster Trace dataset, we evaluate our approach on a 100-node heterogeneous system processing 1,000 tasks per episode over 30 experimental runs. Experimental results demonstrate 15.6% improvement in average task comple...
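The abstract highlights an actor-critic architecture built with only NumPy so it can run on edge devices without heavyweight ML frameworks. A minimal sketch of what such a NumPy-only actor-critic update could look like is below; the layer sizes, learning rates, and toy transition are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

# Minimal NumPy-only advantage actor-critic sketch (illustrative assumptions;
# not the paper's architecture). One agent observes a local state vector and
# picks a scheduling action from a discrete set.
rng = np.random.default_rng(0)

OBS_DIM, N_ACTIONS, HIDDEN = 8, 4, 16  # assumed per-agent sizes

# Actor: obs -> tanh hidden layer -> softmax over actions
W1 = rng.normal(0.0, 0.1, (OBS_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
# Critic: linear state-value estimate
w_v = np.zeros(OBS_DIM)

def actor(obs):
    """Return the hidden activations and the softmax action distribution."""
    h = np.tanh(obs @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return h, e / e.sum()

def td_actor_critic_step(obs, action, reward, next_obs, done,
                         gamma=0.99, lr_a=1e-2, lr_c=1e-2):
    """One TD(0) advantage actor-critic update on a single transition."""
    global W2, w_v
    h, probs = actor(obs)
    v, v_next = obs @ w_v, next_obs @ w_v
    td_error = reward + (0.0 if done else gamma * v_next) - v  # advantage estimate
    # Critic: semi-gradient TD(0) on the linear value weights
    w_v += lr_c * td_error * obs
    # Actor: policy gradient on log pi(a|s), output layer only for brevity
    grad_logits = -probs
    grad_logits[action] += 1.0
    W2 += lr_a * td_error * np.outer(h, grad_logits)
    return td_error

# Toy transition: sample an action, then apply one update
obs = rng.normal(size=OBS_DIM)
h, probs = actor(obs)
a = rng.choice(N_ACTIONS, p=probs)
delta = td_actor_critic_step(obs, a, reward=1.0,
                             next_obs=rng.normal(size=OBS_DIM), done=False)
```

Keeping the critic linear and updating only the actor's output layer keeps the per-step cost to a handful of small matrix products, which is the kind of footprint that makes framework-free edge deployment plausible.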