[2511.03595] Tensor-Efficient High-Dimensional Q-learning
Computer Science > Machine Learning
arXiv:2511.03595 (cs)
[Submitted on 5 Nov 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: Tensor-Efficient High-Dimensional Q-learning
Authors: Junyi Wu, Dan Li

Abstract: High-dimensional reinforcement learning (RL) faces challenges with complex calculations and low sample efficiency in large state-action spaces. Q-learning algorithms struggle particularly with the curse of dimensionality, where the number of state-action pairs grows exponentially with problem size. While neural network-based approaches like Deep Q-Networks have shown success, they do not explicitly exploit problem structure. Many high-dimensional control tasks exhibit low-rank structure in their value functions, and tensor-based methods using low-rank decomposition offer parameter-efficient representations. However, existing tensor-based Q-learning methods focus on representation fidelity without leveraging this structure for exploration. We propose Tensor-Efficient Q-Learning (TEQL), which represents the Q-function as a low-rank CP tensor over discretized state-action spaces and exploits the tensor structure for uncertainty-aware exploration. TEQL incorporates Error-Uncertainty Guided Exploration (EUGE), which combines tensor approximation error with visit counts to guide action selection, along with frequency-aware regular...
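To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of a Q-function stored as a rank-R CP decomposition over a discretized state-action space, with a count-based exploration bonus standing in for the paper's EUGE rule; all names, dimensions, and the simplified bonus (visit counts only, omitting the tensor approximation-error term) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_s, n_a, rank = 10, 4, 2  # discretized state bins, actions, CP rank

# CP factors: Q(s, a) ~= sum_r U[s, r] * V[a, r]
# (rank * (n_s + n_a) parameters instead of n_s * n_a)
U = 0.01 * rng.standard_normal((n_s, rank))
V = 0.01 * rng.standard_normal((n_a, rank))
visits = np.zeros((n_s, n_a))  # visit counts for the exploration bonus

def q_values(s):
    """Low-rank Q estimates for all actions in discretized state s."""
    return U[s] @ V.T  # shape (n_a,)

def select_action(s, beta=1.0):
    """Uncertainty-aware selection: Q plus a count-based bonus.
    (A simplified stand-in for EUGE, which also folds in the
    tensor approximation error.)"""
    bonus = beta / np.sqrt(visits[s] + 1.0)
    return int(np.argmax(q_values(s) + bonus))

def td_update(s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One Q-learning step applied directly to the CP factors,
    following the gradient of the squared TD error through the
    low-rank reconstruction."""
    target = r + gamma * q_values(s_next).max()
    delta = target - float(U[s] @ V[a])
    u_old = U[s].copy()          # snapshot so both factor updates
    U[s] += alpha * delta * V[a]  # use the pre-update values
    V[a] += alpha * delta * u_old
    visits[s, a] += 1.0
```

Updating the factors rather than a dense table keeps the parameter count linear in the number of bins per dimension, which is where the sample-efficiency claim of low-rank methods comes from.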