[2512.17466] Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks

arXiv - Machine Learning 3 min read

About this article

Electrical Engineering and Systems Science > Systems and Control
arXiv:2512.17466 (eess)
[Submitted on 19 Dec 2025 (v1), last revised 2 Apr 2026 (this version, v3)]

Title: Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Authors: Irched Chafaa, Giacomo Bacci, Luca Sanguinetti

Abstract: Optimal AP clustering and power allocation are critical in user-centric cell-free massive MIMO systems. Existing deep learning models lack the flexibility to handle dynamic network configurations. Furthermore, many approaches overlook pilot contamination and suffer from high computational complexity. In this paper, we propose a lightweight transformer model that overcomes these limitations by jointly predicting AP clusters and powers solely from the spatial coordinates of user devices and APs. Our model is architecture-agnostic to the user load, handles both clustering and power allocation without channel estimation overhead, and eliminates pilot contamination by assigning users to APs within a pilot reuse constraint. We also incorporate a customized linear attention mechanism to capture user-AP interactions efficiently and enable linear scalability with respect to the number of users. Numerical results confirm the model's effectiveness in maximizing the minimum spectral...

Originally published on April 03, 2026. Curated by AI News.
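The abstract's key efficiency claim rests on linear attention: replacing the quadratic softmax attention softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV) for a positive feature map φ, so the cost grows linearly in the number of queries. The sketch below shows this generic kernelized form with an elu(x)+1 feature map; it is an illustration of the standard technique only, not the paper's customized mechanism, and the user/AP dimensions and variable names are hypothetical.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention, O(N) in the number of queries.

    Computes phi(Q) @ (phi(K).T @ V) with a per-row normalizer,
    instead of the O(N*M) softmax(Q K^T) V. phi(x) = elu(x) + 1
    keeps all weights positive. Generic sketch, not the paper's
    exact customized mechanism.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qp, Kp = phi(Q), phi(K)          # (N, d) and (M, d)
    KV = Kp.T @ V                    # (d, d_v): summarizes all keys/values once
    Z = Qp @ Kp.sum(axis=0)          # (N,): normalizer per query
    return (Qp @ KV) / Z[:, None]    # (N, d_v)

# Toy user-AP interaction: queries from users, keys/values from APs
# (dimensions are illustrative, not from the paper)
rng = np.random.default_rng(0)
n_users, n_aps, d = 8, 4, 16
Q = rng.standard_normal((n_users, d))
K = rng.standard_normal((n_aps, d))
V = rng.standard_normal((n_aps, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 16): one d-dimensional output per user
```

Because φ(K)ᵀV is computed once and reused for every query, adding more users only adds rows to Q, which is what makes the per-user cost constant and the total cost linear in the number of users.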

Related Articles

LLMs

Training-time intervention yields 63.4% blind-pair human preference at matched val-loss (1.2B params, 320 judgments, p = 1.98 × 10⁻⁵) [R]

TL;DR. I ran a blind A/B preference evaluation between two 1.2B-parameter LMs trained on identical data (same order, same seed, 30K steps...

Reddit - Machine Learning · 1 min ·
Machine Learning

I can't believe text normalization is so underdiscussed in streaming text-to-speech [D]

Kinda surprises me how little discussion there is about mistakes in streaming TTS models. People look for natural readers, high voic...

Reddit - Machine Learning · 1 min ·
Machine Learning

Anthropic’s most dangerous AI model just fell into the wrong hands | The Verge

Anthropic’s powerful Mythos cybersecurity AI model has been accessed by a “small group of unauthorised users.”

The Verge - AI · 4 min ·
Machine Learning

MachineTranslation.com Got 2 More AI Models – So You Never Have to Trust Just One


AI Tools & Products · 1 min ·