[2506.23036] Parameter Stress Analysis in Reinforcement Learning: Applying Synaptic Filtering to Policy Networks
Computer Science > Machine Learning
arXiv:2506.23036 (cs)
[Submitted on 28 Jun 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Authors: Zain ul Abdeen, Ming Jin

Abstract: This paper explores reinforcement learning (RL) policy robustness by systematically analyzing network parameters under internal and external stresses. We apply synaptic filtering methods using high-pass, low-pass, and pulse-wave filters from (Pravin et al., 2024) as an internal stress, selectively perturbing parameters, while adversarial attacks apply external stress through modified agent observations. This dual approach enables the classification of parameters as fragile, robust, or antifragile, based on their influence on policy performance in clean and adversarial settings. Parameter scores are defined to quantify these characteristics, and the framework is validated on proximal policy optimization (PPO)-trained agents in MuJoCo continuous control environments. The results highlight the presence of antifragile parameters that enhance policy performance under stress, demonstrating the potential of targeted filtering techniques to improve RL policy adaptability. Th...
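The high-pass, low-pass, and pulse-wave filters mentioned in the abstract can be read as magnitude-band masks over a policy network's weights. The sketch below is a minimal illustration of that idea, not the paper's implementation: the function name `synaptic_filter` and the specific thresholding scheme (keeping weights whose absolute value falls inside a band and zeroing the rest) are assumptions for exposition.

```python
import numpy as np

def synaptic_filter(params, lo=None, hi=None):
    """Zero out weights outside a magnitude band (illustrative, not the paper's code).

    lo=None           -> low-pass:   keep |w| <= hi
    hi=None           -> high-pass:  keep |w| >= lo
    both lo and hi    -> pulse-wave: keep lo <= |w| <= hi
    """
    mag = np.abs(params)
    mask = np.ones_like(params, dtype=bool)
    if lo is not None:
        mask &= mag >= lo
    if hi is not None:
        mask &= mag <= hi
    # Multiplying by the boolean mask zeroes the filtered-out parameters.
    return params * mask

# Toy weight vector standing in for one layer of a PPO policy network.
w = np.array([0.05, -0.3, 1.2, -0.8, 0.01])
print(synaptic_filter(w, hi=0.5))          # low-pass:  [0.05, -0.3, 0.0, 0.0, 0.01]
print(synaptic_filter(w, lo=0.5))          # high-pass: [0.0, 0.0, 1.2, -0.8, 0.0]
print(synaptic_filter(w, lo=0.1, hi=1.0))  # pulse-wave: [0.0, -0.3, 0.0, -0.8, 0.0]
```

Evaluating the filtered policy's return in clean and adversarial settings, and comparing it to the unfiltered baseline, is what would let one label the masked parameters as fragile (performance drops), robust (unchanged), or antifragile (performance improves under stress), in the sense the abstract describes.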