[2510.11418] Forward-Forward Autoencoder Architectures for Energy-Efficient Wireless Communications
Summary
This article presents Forward-Forward Autoencoder architectures aimed at enhancing energy efficiency in wireless communications, demonstrating competitive performance against traditional backpropagation methods.
Why It Matters
As energy efficiency becomes increasingly critical in wireless communications, this research introduces a novel approach that leverages Forward-Forward learning, potentially transforming how communication systems are designed and implemented. It highlights a method that can operate effectively without the need for differentiable channels, making it relevant for real-world applications.
Key Takeaways
- Forward-Forward learning offers an energy-efficient alternative to backpropagation for training neural networks.
- The proposed autoencoder architectures demonstrate competitive performance in various communication scenarios.
- Significant savings in memory and processing time are achieved compared to traditional methods.
- The approach does not require differentiable communication channels, broadening its applicability.
- The paper also offers insights into FF network design principles and training convergence behavior.
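To make the Forward-Forward idea concrete, here is a minimal illustrative sketch of one FF-trained layer, not the paper's architecture: each layer is optimized locally by pushing a "goodness" score (sum of squared activations) above a threshold for positive samples and below it for negative samples, so no gradients ever flow backward between layers. The layer sizes, threshold, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with a local Forward-Forward rule: raise the
    goodness (sum of squared activations) for positive samples and
    lower it for negative samples. No backpropagation between layers."""

    def __init__(self, n_in, n_out, theta=2.0, lr=0.03):
        # theta and lr are illustrative hyperparameter choices
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.b = np.zeros(n_out)
        self.theta, self.lr = theta, lr

    def forward(self, x):
        # Normalize the input so goodness information from the previous
        # layer cannot leak through its raw magnitude.
        xn = x / (np.linalg.norm(x) + 1e-8)
        z = self.W @ xn + self.b
        return np.maximum(z, 0.0), xn, z

    def train_step(self, x, positive):
        h, xn, z = self.forward(x)
        g = np.sum(h ** 2)                      # goodness of this sample
        y = 1.0 if positive else -1.0
        # Logistic loss log(1 + exp(-y * (g - theta))); gradient w.r.t. g,
        # with the exponent clipped for numerical safety:
        dg = -y / (1.0 + np.exp(np.clip(y * (g - self.theta), -50.0, 50.0)))
        dh = dg * 2.0 * h                       # d goodness / d activations
        dz = dh * (z > 0)                       # ReLU gate
        self.W -= self.lr * np.outer(dz, xn)    # purely local update
        self.b -= self.lr * dz
        return h
```

After training on positive samples (e.g. clean signal patterns) and negative samples (e.g. noise), positive inputs should yield higher goodness than negative ones; stacking such layers and feeding each layer's output forward gives the layer-local training that lets FF bypass a non-differentiable channel.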
Computer Science > Information Theory
arXiv:2510.11418 (cs)
[Submitted on 13 Oct 2025 (v1), last revised 16 Feb 2026 (this version, v2)]
Title: Forward-Forward Autoencoder Architectures for Energy-Efficient Wireless Communications
Authors: Daniel Seifert, Onur Günlü, Rafael F. Schaefer
Abstract: The application of deep learning to the area of communication systems has been a growing field of interest in recent years. Forward-forward (FF) learning is an efficient alternative to the backpropagation (BP) algorithm, which is the typically used training procedure for neural networks. Among its several advantages, FF learning does not require the communication channel to be differentiable and does not rely on the global availability of partial derivatives, allowing for an energy-efficient implementation. In this work, we design end-to-end learned autoencoders using the FF algorithm and numerically evaluate their performance for the additive white Gaussian noise and Rayleigh block fading channels. We demonstrate their competitiveness with BP-trained systems in the case of joint coding and modulation, and in a scenario where a fixed, non-differentiable modulation stage is applied. Moreover, we provide further insights into the design principles of the FF network, its training convergence behavior, ...