[2602.17832] MePoly: Max Entropy Polynomial Policy Optimization

arXiv - Machine Learning · 3 min read

Summary

MePoly introduces a polynomial energy-based policy model for stochastic optimal control that captures multi-modal solutions and, through an explicit and tractable probability density, enables exact entropy maximization.

Why It Matters

This research addresses a limitation of conventional parametric policies in reinforcement learning: they struggle to represent multi-modal solutions, while diffusion-based alternatives recover multi-modality but lack an explicit density. By providing a tractable probability density, MePoly keeps policy-gradient optimization straightforward, which matters for complex decision-making applications in robotics and beyond.

Key Takeaways

  • MePoly offers a new polynomial energy-based model for policy optimization.
  • It effectively captures complex non-convex manifolds in decision-making.
  • The method enables exact entropy maximization through an explicit, tractable probability density.
  • Empirical results show superior performance over existing baselines.
  • The approach is grounded in the classical moment problem, showcasing theoretical robustness.

Computer Science > Machine Learning
arXiv:2602.17832 (cs) [Submitted on 19 Feb 2026]

Title: MePoly: Max Entropy Polynomial Policy Optimization
Authors: Hang Liu, Sangli Teng, Maani Ghaffari

Abstract: Stochastic Optimal Control provides a unified mathematical framework for solving complex decision-making problems, encompassing paradigms such as maximum entropy reinforcement learning (RL) and imitation learning (IL). However, conventional parametric policies often struggle to represent the multi-modality of the solutions. Though diffusion-based policies aim to recover this multi-modality, they lack an explicit probability density, which complicates policy-gradient optimization. To bridge this gap, we propose MePoly, a novel policy parameterization based on polynomial energy-based models. MePoly provides an explicit, tractable probability density, enabling exact entropy maximization. Theoretically, we ground our method in the classical moment problem, leveraging its universal approximation capabilities for arbitrary distributions. Empirically, we demonstrate that MePoly effectively captures complex non-convex manifolds and outperforms baselines across diverse benchmarks.

Subjects: Machine Learning (cs.LG); Robotics (cs.RO)
Cite as: arXiv:2602.17832 [cs.LG] (or arXiv:2602.17832v1 [cs.LG] for this version) https://...
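To make the abstract's core idea concrete: a minimal 1-D sketch of a polynomial energy-based density, assuming a hypothetical bounded action interval and arbitrary illustrative coefficients. This is not the paper's implementation; it only shows why an explicit density p(a) ∝ exp(-E(a)), with E(a) a polynomial, makes multi-modality and entropy directly computable.

```python
import numpy as np

# Arbitrary illustrative coefficients: E(a) = -4*a^2 + a^4 (a double well),
# chosen so the resulting policy density is bi-modal.
coeffs = [0.0, 0.0, -4.0, 0.0, 1.0]

def energy(a, coeffs):
    """Polynomial energy E(a) = sum_k coeffs[k] * a**k."""
    return sum(c * a**k for k, c in enumerate(coeffs))

# Normalize exp(-E) on a grid over a bounded action interval to obtain an
# explicit, tractable density (a crude stand-in for exact normalization).
grid = np.linspace(-3.0, 3.0, 2001)
da = grid[1] - grid[0]
unnorm = np.exp(-energy(grid, coeffs))
Z = unnorm.sum() * da          # partition function (Riemann-sum estimate)
density = unnorm / Z

# With an explicit density, the differential entropy is directly computable,
# which is the ingredient that enables exact entropy maximization.
entropy = -np.sum(density * np.log(density)) * da
```

The double-well energy yields two modes near a = ±√2, the kind of multi-modal policy that a unimodal Gaussian parameterization cannot represent; a diffusion policy could sample such modes but would not expose the density or entropy in closed form.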

Related Articles

  • [2602.08277] PISCO: Precise Video Instance Insertion with Sparse Control · Generative AI · arXiv - AI · 4 min
  • [2511.18746] Any4D: Open-Prompt 4D Generation from Natural Language and Images · Machine Learning · arXiv - AI · 4 min
  • [2512.14549] Dual-objective Language Models: Training Efficiency Without Overfitting · LLMs · arXiv - AI · 3 min
  • [2510.21011] Generating the Modal Worker: A Cross-Model Audit of Race and Gender in LLM-Generated Personas Across 41 Occupations · LLMs · arXiv - AI · 4 min