[2510.11491] Constraint-Aware Reinforcement Learning via Adaptive Action Scaling


arXiv - Machine Learning 3 min read


Computer Science > Robotics
arXiv:2510.11491 (cs)
[Submitted on 13 Oct 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Constraint-Aware Reinforcement Learning via Adaptive Action Scaling
Authors: Murad Dawood, Usama Ahmed Siddiquie, Shahram Khorshidi, Maren Bennewitz

Abstract: Safe reinforcement learning (RL) seeks to mitigate unsafe behaviors that arise from exploration during training by reducing constraint violations while maintaining task performance. Existing approaches typically rely on a single policy to jointly optimize reward and safety, which can cause instability due to conflicting objectives, or they use external safety filters that override actions and require prior system knowledge. In this paper, we propose a modular cost-aware regulator that scales the agent's actions based on predicted constraint violations, preserving exploration through smooth action modulation rather than overriding the policy. The regulator is trained to minimize constraint violations while avoiding degenerate suppression of actions. Our approach integrates seamlessly with off-policy RL methods such as SAC and TD3, and achieves state-of-the-art return-to-cost ratios on Safety Gym locomotion tasks with sparse costs, reducing constraint violations by up to 126 times while increasing returns by over an order of magnitude...
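The core idea in the abstract — shrinking actions smoothly as predicted constraint violations grow, with a floor that prevents degenerate suppression, rather than overriding the policy outright — can be illustrated with a minimal sketch. This is not the authors' implementation: the `CostAwareRegulator` class, its `sensitivity` and `floor` parameters, and the placeholder cost model are all hypothetical stand-ins (in the paper, the cost predictor would be learned alongside an off-policy agent such as SAC or TD3).

```python
import numpy as np

class CostAwareRegulator:
    """Hypothetical sketch of cost-aware action scaling.

    Maps a predicted constraint-violation cost to a smooth scale
    factor in [floor, 1], applied multiplicatively to the action.
    """

    def __init__(self, sensitivity=2.0, floor=0.1):
        self.sensitivity = sensitivity  # how sharply the scale drops with cost
        self.floor = floor              # minimum scale; avoids degenerate suppression

    def predict_cost(self, state, action):
        # Placeholder cost model: in the paper this would be a learned
        # predictor of constraint violations. Here we simply use how far
        # the action magnitude exceeds a unit bound.
        return max(0.0, float(np.linalg.norm(action)) - 1.0)

    def scale(self, state, action):
        c = self.predict_cost(state, action)
        # Smooth modulation: scale decays exponentially with predicted
        # cost but never drops below the floor, so exploration survives.
        s = self.floor + (1.0 - self.floor) * np.exp(-self.sensitivity * c)
        return s * np.asarray(action, dtype=float)

regulator = CostAwareRegulator()
safe = regulator.scale(None, [0.5, 0.5])   # zero predicted cost: unscaled
risky = regulator.scale(None, [3.0, 4.0])  # high predicted cost: strongly shrunk
```

Because the modulation is multiplicative and continuous, the policy's action direction is preserved and gradients through the action are well-behaved — in contrast to a hard safety filter that replaces the action entirely.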

Originally published on April 03, 2026. Curated by AI News.

