[2510.06714] Dual Goal Representations
Summary
The paper introduces dual goal representations for goal-conditioned reinforcement learning (GCRL), which characterize a state by its temporal distances to all other states, improving goal-reaching performance across a range of tasks.
Why It Matters
This research is significant as it presents a novel approach to reinforcement learning that can enhance the efficiency of goal-reaching algorithms. By focusing on intrinsic dynamics and filtering noise, it offers a robust method that can be applied across diverse environments, making it relevant for advancements in AI and machine learning applications.
Key Takeaways
- Dual goal representations improve state characterization in GCRL.
- The approach is invariant to original state representations.
- It filters out exogenous noise while retaining essential information.
- Empirical results show improved offline goal-reaching performance across 20 state- and pixel-based OGBench tasks.
- The method can be integrated with existing GCRL algorithms.
Computer Science > Machine Learning
arXiv:2510.06714 (cs)
Submitted on 8 Oct 2025 (v1); last revised 15 Feb 2026 (this version, v2)
Title: Dual Goal Representations
Authors: Seohong Park, Deepinder Mann, Sergey Levine
Abstract: In this work, we introduce dual goal representations for goal-conditioned reinforcement learning (GCRL). A dual goal representation characterizes a state by "the set of temporal distances from all other states"; in other words, it encodes a state through its relations to every other state, measured by temporal distance. This representation provides several appealing theoretical properties. First, it depends only on the intrinsic dynamics of the environment and is invariant to the original state representation. Second, it contains provably sufficient information to recover an optimal goal-reaching policy, while being able to filter out exogenous noise. Based on this concept, we develop a practical goal representation learning method that can be combined with any existing GCRL algorithm. Through diverse experiments on the OGBench task suite, we empirically show that dual goal representations consistently improve offline goal-reaching performance across 20 state- and pixel-based tasks.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2510.06714 [cs.LG] (or arXiv:2510.06714v2 [cs.LG] for this version)
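To make the core idea concrete, here is a minimal toy sketch (not the paper's learned method, which uses function approximation on offline data) of what "representing a goal by its temporal distances from all other states" means. It assumes a small deterministic environment given as an adjacency list, computes exact all-pairs temporal distances with BFS, and reads off a goal's dual representation as one column of the distance matrix; all function names are illustrative.

```python
import numpy as np
from collections import deque

def temporal_distances(adj):
    """All-pairs shortest-path (temporal) distances via BFS on a
    deterministic transition graph given as an adjacency list."""
    n = len(adj)
    d = np.full((n, n), np.inf)
    for s in range(n):
        d[s, s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if d[s, v] == np.inf:
                    d[s, v] = d[s, u] + 1
                    queue.append(v)
    return d

def dual_goal_representation(d, g):
    """A goal g is encoded by its vector of temporal distances
    from every state, not by its raw state features."""
    return d[:, g]

# Toy 4-state chain environment: 0 <-> 1 <-> 2 <-> 3
adj = [[1], [0, 2], [1, 3], [2]]
d = temporal_distances(adj)
print(dual_goal_representation(d, 3))  # [3. 2. 1. 0.]
```

Because this vector depends only on reachability under the dynamics, any relabeling of the raw state features leaves it unchanged, which is the invariance property the abstract highlights.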