[2506.23036] Parameter Stress Analysis in Reinforcement Learning: Applying Synaptic Filtering to Policy Networks

arXiv - Machine Learning 4 min read


Computer Science > Machine Learning, arXiv:2506.23036 (cs)
[Submitted on 28 Jun 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Parameter Stress Analysis in Reinforcement Learning: Applying Synaptic Filtering to Policy Networks
Authors: Zain ul Abdeen, Ming Jin

Abstract: This paper explores reinforcement learning (RL) policy robustness by systematically analyzing network parameters under internal and external stresses. We apply synaptic filtering methods using high-pass, low-pass, and pulse-wave filters from Pravin et al. (2024) as an internal stress by selectively perturbing parameters, while adversarial attacks apply external stress through modified agent observations. This dual approach enables the classification of parameters as fragile, robust, or antifragile, based on their influence on policy performance in clean and adversarial settings. Parameter scores are defined to quantify these characteristics, and the framework is validated on proximal policy optimization (PPO)-trained agents in MuJoCo continuous control environments. The results highlight the presence of antifragile parameters that enhance policy performance under stress, demonstrating the potential of targeted filtering techniques to improve RL policy adaptability. Th...
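The filtering-and-classification idea in the abstract can be sketched in a few lines. The snippet below is a toy illustration only, not the paper's actual method: the `synaptic_filter` and `classify_parameter` functions, the magnitude-threshold interpretation of low-/high-pass filtering, and the fragile/robust/antifragile decision rules are all assumptions for exposition.

```python
import numpy as np

def synaptic_filter(params, threshold, mode="low"):
    """Selectively zero out parameters by magnitude (assumed interpretation):
    'low' keeps small-magnitude weights, 'high' keeps large-magnitude weights."""
    mag = np.abs(params)
    if mode == "low":
        mask = mag <= threshold
    elif mode == "high":
        mask = mag > threshold
    else:
        raise ValueError(f"unknown mode: {mode}")
    return params * mask

def classify_parameter(clean_delta, adv_delta, tol=1e-3):
    """Toy classification from performance changes after filtering:
    fragile if clean performance drops, antifragile if adversarial
    performance improves, robust otherwise (illustrative rule only)."""
    if clean_delta < -tol:
        return "fragile"
    if adv_delta > tol:
        return "antifragile"
    return "robust"

# Example: high-pass filtering keeps only the large-magnitude weight.
weights = np.array([0.1, -2.0, 0.5])
filtered = synaptic_filter(weights, threshold=1.0, mode="high")
label = classify_parameter(clean_delta=0.0, adv_delta=0.02)
```

In the paper's framework, the deltas would come from evaluating the PPO policy on MuJoCo tasks before and after filtering, under both clean and adversarially perturbed observations; here they are just placeholder numbers.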

Originally published on March 06, 2026. Curated by AI News.

Related Articles


[P] Using YouTube as a data source (lessons from building a coffee domain dataset)

I started working on a small coffee coaching app recently - something that could answer questions around brew methods, grind size, extrac...

Reddit - Machine Learning · 1 min ·
[2601.13227] Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?

Abstract page for arXiv paper 2601.13227: Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?

arXiv - AI · 3 min ·
[2601.22440] AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations

Abstract page for arXiv paper 2601.22440: AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Value...

arXiv - AI · 4 min ·
[2601.13222] Incorporating Q&A Nuggets into Retrieval-Augmented Generation

Abstract page for arXiv paper 2601.13222: Incorporating Q&A Nuggets into Retrieval-Augmented Generation

arXiv - AI · 3 min ·
