[2410.16106] Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Summary
This paper studies the statistical properties of Temporal Difference (TD) learning with Polyak-Ruppert averaging for estimating the parameters of the best linear approximation to the value function in reinforcement learning.
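For reference, here is a minimal sketch of averaged TD(0) with linear function approximation, assuming i.i.d. transitions as in the paper's sampling model; the feature map, step-size schedule, and helper names (`sample_transition`, `phi`) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def averaged_td(sample_transition, phi, dim, gamma=0.9, eta0=0.5, T=10_000, rng=None):
    """Averaged TD(0) with linear function approximation (illustrative sketch).

    sample_transition(rng) -> (s, r, s_next): assumed to draw i.i.d. transitions,
    matching the independent-sample model analyzed in the paper.
    phi(s) -> feature vector of length `dim`.
    Returns the Polyak-Ruppert average of the TD iterates.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros(dim)
    theta_bar = np.zeros(dim)
    for t in range(1, T + 1):
        s, r, s_next = sample_transition(rng)
        f, f_next = phi(s), phi(s_next)
        # TD(0) update: semi-gradient step on the temporal-difference error.
        td_error = r + gamma * f_next @ theta - f @ theta
        eta_t = eta0 / np.sqrt(t)  # decaying step size (illustrative choice)
        theta = theta + eta_t * td_error * f
        # Polyak-Ruppert averaging: running mean of the iterates.
        theta_bar += (theta - theta_bar) / t
    return theta_bar
```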
Why It Matters
The results sharpen the statistical guarantees available for one of the most widely used reinforcement learning algorithms: refined Berry-Esseen bounds, a practical online covariance estimator, and variance-dependent high-probability guarantees together make it possible to attach rigorous uncertainty quantification, rather than point estimates alone, to the value-function parameters learned by averaged TD.
Key Takeaways
- Establishes refined high-dimensional Berry-Esseen bounds for TD learning.
- Introduces a novel online plug-in estimator of the asymptotic covariance matrix (see the sketch after this list).
- Provides sharper convergence guarantees under weaker conditions.
- Enables the construction of confidence regions for linear parameters.
- Demonstrates theoretical findings through numerical experiments.
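To make the covariance takeaway concrete, the sketch below uses the standard sandwich form Lambda = A^{-1} Gamma A^{-T} for averaged stochastic approximation, replacing the two expectations with running sample averages and plugging in the current parameter estimate. The specific recursions and the ridge regularization are assumptions for illustration, not the estimator analyzed in the paper.

```python
import numpy as np

class PluginCovariance:
    """Running plug-in estimate of the asymptotic covariance of averaged TD.

    Maintains sample averages A_hat ~ E[f (f - gamma * f_next)^T] and
    Gamma_hat ~ E[delta^2 * f f^T], then returns A_hat^{-1} Gamma_hat A_hat^{-T}.
    Plugging the current iterate into the TD error and adding a ridge term
    before inversion are illustrative choices, not the paper's construction.
    """

    def __init__(self, dim, gamma=0.9, ridge=1e-8):
        self.gamma = gamma
        self.ridge = ridge
        self.A_hat = np.zeros((dim, dim))
        self.Gamma_hat = np.zeros((dim, dim))
        self.n = 0

    def update(self, f, r, f_next, theta):
        """Consume one transition (features f, reward r, next features f_next)."""
        self.n += 1
        delta = r + self.gamma * f_next @ theta - f @ theta
        # Running means of the design matrix and of the noise covariance.
        self.A_hat += (np.outer(f, f - self.gamma * f_next) - self.A_hat) / self.n
        self.Gamma_hat += (delta ** 2 * np.outer(f, f) - self.Gamma_hat) / self.n

    def estimate(self):
        """Sandwich (plug-in) estimate of the asymptotic covariance matrix."""
        A_reg = self.A_hat + self.ridge * np.eye(self.A_hat.shape[0])
        A_inv = np.linalg.inv(A_reg)
        return A_inv @ self.Gamma_hat @ A_inv.T
```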
Statistics > Machine Learning
arXiv:2410.16106 (stat)
[Submitted on 21 Oct 2024 (v1), last revised 24 Feb 2026 (this version, v5)]
Title: Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Authors: Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
Abstract: We investigate the statistical properties of Temporal Difference (TD) learning with Polyak-Ruppert averaging, arguably one of the most widely used algorithms in reinforcement learning, for the task of estimating the parameters of the optimal linear approximation to the value function. Assuming independent samples, we make three theoretical contributions that improve upon the current state-of-the-art results: (i) we establish refined high-dimensional Berry-Esseen bounds over the class of convex sets, achieving faster rates than the best known results; (ii) we propose and analyze a novel, computationally efficient online plug-in estimator of the asymptotic covariance matrix; and (iii) we derive sharper high-probability convergence guarantees that depend explicitly on the asymptotic variance and hold under weaker conditions than those adopted in the literature. These results enable the construction of confidence regions and simultaneous confidence intervals for the linear parameters of the value function approximation, with ...
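Given the averaged iterate and an estimated asymptotic covariance, per-coordinate and simultaneous confidence intervals follow the usual Gaussian recipe; the sketch below uses a Bonferroni correction for simultaneity, which is an illustrative choice rather than the construction studied in the paper.

```python
import numpy as np
from scipy.stats import norm

def confidence_intervals(theta_bar, cov_hat, T, alpha=0.05, simultaneous=True):
    """(1 - alpha) confidence intervals for each linear parameter.

    theta_bar : Polyak-Ruppert averaged TD iterate after T samples.
    cov_hat   : estimate of the asymptotic covariance matrix.
    Uses a Bonferroni correction for simultaneous coverage (illustrative choice).
    Returns an array of (lower, upper) pairs, one row per coordinate.
    """
    dim = theta_bar.shape[0]
    level = alpha / dim if simultaneous else alpha
    z = norm.ppf(1 - level / 2)
    half_width = z * np.sqrt(np.diag(cov_hat) / T)
    return np.stack([theta_bar - half_width, theta_bar + half_width], axis=1)
```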