[2603.04768] Distributional Reinforcement Learning with Information Bottleneck for Uncertainty-Aware DRAM Equalization
Computer Science > Machine Learning
arXiv:2603.04768 (cs)
[Submitted on 5 Mar 2026]

Title: Distributional Reinforcement Learning with Information Bottleneck for Uncertainty-Aware DRAM Equalization
Authors: Muhammad Usama, Dong Eui Chang

Abstract: Equalizer parameter optimization is critical for signal integrity in high-speed memory systems operating at multi-gigabit data rates. However, existing methods suffer from computationally expensive eye-diagram evaluation, optimization of expected rather than worst-case performance, and the absence of uncertainty quantification for deployment decisions. In this paper, we propose a distributional risk-sensitive reinforcement learning framework that integrates Information Bottleneck latent representations with Conditional Value-at-Risk (CVaR) optimization. We introduce a rate-distortion-optimal signal compression achieving a 51x speedup over eye-diagram evaluation while quantifying epistemic uncertainty through Monte Carlo dropout. Distributional reinforcement learning with quantile regression enables explicit worst-case optimization, while PAC-Bayesian regularization certifies generalization bounds. Experimental validation on 2.4 million waveforms from eight memory units demonstrated mean improvements of 37.1% and 41.5% for 4-tap and 8-tap equalizer configurations, respectively.
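The abstract names three ingredients that fit together in a standard way: a quantile-regression critic that predicts a return distribution, a CVaR objective computed over the predicted quantiles for worst-case optimization, and Monte Carlo dropout for epistemic uncertainty. The sketch below shows one minimal way these pieces are typically assembled; it is not the paper's implementation, and the module name, network sizes, quantile count, and all hyperparameters are illustrative assumptions (the Information Bottleneck encoder and PAC-Bayesian regularizer are omitted).

```python
# Minimal sketch (assumed, not the authors' code): a quantile-based
# distributional critic with MC dropout, plus CVaR over its quantiles.
import torch
import torch.nn as nn

N_QUANTILES = 32  # number of return quantiles predicted (assumed)
# Quantile midpoints tau_i = (i + 0.5) / N, as in QR-DQN.
TAUS = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES


class QuantileCritic(nn.Module):
    """Maps a (state, action) feature vector to N return quantiles."""

    def __init__(self, in_dim: int, hidden: int = 128, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(p_drop),  # left stochastic at test time for MC dropout
            nn.Linear(hidden, N_QUANTILES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (batch, N_QUANTILES)


def quantile_huber_loss(pred: torch.Tensor, target: torch.Tensor,
                        kappa: float = 1.0) -> torch.Tensor:
    """Quantile-regression Huber loss over all (pred, target) quantile pairs."""
    # td[b, i, j] = target[b, j] - pred[b, i]  -> shape (batch, N, N)
    td = target.unsqueeze(1) - pred.unsqueeze(2)
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    taus = TAUS.to(pred.device).view(1, -1, 1)
    weight = (taus - (td.detach() < 0).float()).abs()  # asymmetric pinball weight
    return (weight * huber / kappa).sum(dim=1).mean()


def cvar_from_quantiles(quantiles: torch.Tensor,
                        alpha: float = 0.1) -> torch.Tensor:
    """CVaR_alpha of the return: mean of the worst alpha-fraction of quantiles."""
    k = max(1, int(alpha * quantiles.shape[-1]))
    worst, _ = torch.topk(quantiles, k, dim=-1, largest=False)
    return worst.mean(dim=-1)


@torch.no_grad()
def mc_dropout_uncertainty(critic: QuantileCritic, x: torch.Tensor,
                           n_samples: int = 20) -> torch.Tensor:
    """Epistemic uncertainty: std of the mean return across dropout samples."""
    critic.train()  # keep dropout active while sampling
    means = torch.stack([critic(x).mean(dim=-1) for _ in range(n_samples)])
    return means.std(dim=0)
```

Under these assumptions, the agent would rank candidate equalizer settings by `cvar_from_quantiles` rather than the mean return, which is what makes the optimization explicitly worst-case, and would gate deployment decisions on `mc_dropout_uncertainty` exceeding a chosen threshold.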