[2602.19419] RAmmStein: Regime Adaptation in Mean-reverting Markets with Stein Thresholds -- Optimal Impulse Control in Concentrated AMMs
Summary
The paper presents RAmmStein, a Deep Reinforcement Learning approach for optimal liquidity management in decentralized exchanges, focusing on impulse control in mean-reverting markets.
Why It Matters
This research addresses the challenges faced by liquidity providers in decentralized finance, particularly the trade-off between maximizing fee returns and minimizing operational costs. By introducing a novel control framework, it improves capital efficiency and offers insights into market dynamics that are valuable to both practitioners and researchers in the field.
Key Takeaways
- RAmmStein optimizes liquidity management using Deep Reinforcement Learning.
- The method reduces rebalancing frequency by 67% while maintaining a high in-range (active) time.
- It achieves a net ROI of 0.72%, outperforming traditional strategies.
- The framework incorporates market dynamics through mean-reversion speed.
- Regime-aware laziness improves capital efficiency by minimizing operational costs.
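The mean-reversion speed mentioned above is the theta parameter of an Ornstein-Uhlenbeck (OU) process, dX_t = theta (mu - X_t) dt + sigma dW_t: larger theta pulls the price back toward its long-run mean mu faster, which is exactly when a "lazy" policy can skip rebalancing. A minimal Euler-Maruyama simulation sketch of this dynamic (all parameter values here are illustrative, not taken from the paper):

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, seed=0):
    """Euler-Maruyama sketch of an Ornstein-Uhlenbeck price path:
        dX_t = theta * (mu - X_t) dt + sigma * dW_t
    theta is the mean-reversion speed the paper uses as a model input.
    """
    rng = random.Random(seed)
    path = [x0]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x = path[-1]
        drift = theta * (mu - x) * dt          # pull toward the mean mu
        shock = sigma * sqrt_dt * rng.gauss(0.0, 1.0)  # Brownian increment
        path.append(x + drift + shock)
    return path

# With strong mean reversion, a price starting at 110 is pulled back near mu=100.
path = simulate_ou(theta=5.0, mu=100.0, sigma=2.0, x0=110.0, dt=0.001, n_steps=2000)
```

With theta = 5 over two time units, the initial displacement decays by roughly e^{-10}, so the path ends near the mean; with theta near zero the same code reduces to a random walk, where lazy rebalancing would be costly.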
Abstract
Computer Science > Machine Learning · arXiv:2602.19419 (cs) · Submitted on 23 Feb 2026
Title: RAmmStein: Regime Adaptation in Mean-reverting Markets with Stein Thresholds -- Optimal Impulse Control in Concentrated AMMs
Authors: Pranay Anchuri
Concentrated liquidity provision in decentralized exchanges presents a fundamental impulse control problem. Liquidity providers (LPs) face a non-trivial trade-off between maximizing fee accrual through tight price-range concentration and minimizing the friction costs of rebalancing, including gas fees and swap slippage. Existing methods typically employ heuristic or threshold strategies that fail to account for market dynamics. This paper formulates liquidity management as an optimal control problem and derives the corresponding Hamilton-Jacobi-Bellman quasi-variational inequality (HJB-QVI). We present an approximate solution, RAmmStein, a Deep Reinforcement Learning method that incorporates the mean-reversion speed (theta) of an Ornstein-Uhlenbeck process, among other features, as input to the model. We demonstrate that the agent learns to separate the state space into regions of action and inaction. We evaluate the framework using high-frequency 1 Hz Coinbase trade data comprising over 6.8M trades. Experimental results show that RA...
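The "regions of action and inaction" the abstract describes generalize the classic impulse-control structure: do nothing while the state stays inside an inaction region, and pay the fixed rebalancing cost only when it exits. A hypothetical threshold baseline of this shape (the function names, `band` parameter, and numbers are illustrative assumptions, not the paper's learned policy, which replaces the fixed boundary with a state-dependent one):

```python
def should_rebalance(price, range_low, range_high, band=0.0):
    """Threshold rule: act only when the price exits the inaction region,
    i.e. the current LP range widened by an optional tolerance band."""
    return price < range_low - band or price > range_high + band

def recenter_range(price, width):
    """Impulse: recenter the concentrated-liquidity range on the price."""
    half = width / 2.0
    return (price - half, price + half)

# Price drifts out of the [98, 102] range, triggering one rebalance.
low, high = 98.0, 102.0
if should_rebalance(103.5, low, high, band=1.0):
    low, high = recenter_range(103.5, width=4.0)
```

In a mean-reverting regime (large theta) the price tends to return on its own, so widening `band` trades a little fee accrual for far fewer paid impulses; this is the intuition behind the paper's "regime-aware laziness."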