[2601.09282] Cluster Workload Allocation: Semantic Soft Affinity Using Natural Language Processing
Summary
This paper presents a semantic, intent-driven scheduling paradigm for cluster workload allocation: a Large Language Model, integrated via a Kubernetes scheduler extender, interprets natural-language allocation hints as soft affinity preferences, improving both usability and scheduling quality.
Why It Matters
The integration of Natural Language Processing with cluster workload allocation addresses the usability gap in complex configurations, making advanced scheduling accessible. This research demonstrates the potential of Large Language Models in improving scheduling quality and efficiency, which is crucial for optimizing resource allocation in cloud computing environments.
Key Takeaways
- Introduces a semantic, intent-driven scheduling paradigm for clusters.
- Utilizes Large Language Models to interpret natural language allocation hints.
- Achieves over 95% parsing accuracy with top-tier models.
- Demonstrates superior scheduling quality compared to standard Kubernetes setups.
- Highlights the need for asynchronous processing to mitigate LLM latency.
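The takeaways above describe a pipeline in which a natural-language hint annotation is parsed into structured soft-affinity preferences that then influence node ranking. The following minimal sketch illustrates that flow; the parser here is a trivial keyword stand-in for the paper's LLM intent analyzer, and all names (`parse_hint`, `Preference`, the node labels) are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Preference:
    key: str      # node label key to match, e.g. "disktype"
    value: str    # desired label value, e.g. "ssd"
    weight: int   # soft-preference weight (higher = stronger)

def parse_hint(hint: str) -> list[Preference]:
    """Toy stand-in for the LLM parser: maps known phrases to preferences."""
    prefs = []
    if "ssd" in hint.lower():
        prefs.append(Preference("disktype", "ssd", 80))
    if "gpu" in hint.lower():
        prefs.append(Preference("accelerator", "gpu", 100))
    return prefs

def score_node(labels: dict[str, str], prefs: list[Preference]) -> int:
    """Sum the weights of satisfied soft preferences; unmet ones cost nothing,
    which is what makes the affinity 'soft' rather than a hard constraint."""
    return sum(p.weight for p in prefs if labels.get(p.key) == p.value)

prefs = parse_hint("prefer fast ssd storage near a gpu if possible")
nodes = {
    "node-a": {"disktype": "ssd"},
    "node-b": {"disktype": "hdd", "accelerator": "gpu"},
}
ranked = sorted(nodes, key=lambda n: score_node(nodes[n], prefs), reverse=True)
print(ranked)  # node-b (gpu, weight 100) ranks above node-a (ssd, weight 80)
```

Because unsatisfied preferences merely lower a node's score instead of filtering it out, conflicting hints degrade gracefully, which matches the paper's emphasis on handling conflicting soft preferences.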
Computer Science > Artificial Intelligence
arXiv:2601.09282 (cs)
[Submitted on 14 Jan 2026 (v1), last revised 20 Feb 2026 (this version, v2)]
Title: Cluster Workload Allocation: Semantic Soft Affinity Using Natural Language Processing
Authors: Leszek Sliwko, Jolanta Mizeria-Pietraszko
Abstract: Cluster workload allocation often requires complex configurations, creating a usability gap. This paper introduces a semantic, intent-driven scheduling paradigm for cluster systems using Natural Language Processing. The system employs a Large Language Model (LLM) integrated via a Kubernetes scheduler extender to interpret natural language allocation hint annotations for soft affinity preferences. A prototype featuring a cluster state cache and an intent analyzer (using AWS Bedrock) was developed. Empirical evaluation demonstrated high LLM parsing accuracy (>95% Subset Accuracy on an evaluation ground-truth dataset) for top-tier models like Amazon Nova Pro/Premier and Mistral Pixtral Large, significantly outperforming a baseline engine. Scheduling quality tests across six scenarios showed the prototype achieved superior or equivalent placement compared to standard Kubernetes configurations, particularly excelling in complex and quantitative scenarios and handling conflicting soft preferences. The results validate using LLMs f...
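The abstract reports parsing quality as Subset Accuracy, the strict multi-label metric in which a prediction counts only when the predicted set of constraints exactly equals the ground-truth set. A short sketch of that metric (the example label sets are invented for illustration, not drawn from the paper's dataset):

```python
def subset_accuracy(predicted: list[set], truth: list[set]) -> float:
    """Fraction of examples whose predicted label set exactly matches the
    ground truth (partial matches score zero)."""
    assert len(predicted) == len(truth) and truth
    hits = sum(p == t for p, t in zip(predicted, truth))
    return hits / len(truth)

pred  = [{"disktype=ssd"}, {"gpu"}, {"zone=eu", "ssd"}]
truth = [{"disktype=ssd"}, {"gpu", "ssd"}, {"zone=eu", "ssd"}]
print(subset_accuracy(pred, truth))  # 2 of 3 exact matches
```

Under this metric a parser gets no credit for recovering only some of a hint's constraints, which makes the reported >95% figure a demanding result.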