[2602.23148] On Sample-Efficient Generalized Planning via Learned Transition Models

Summary

This paper explores sample-efficient generalized planning through learned transition models, showing that explicitly learning the successor-state function improves out-of-distribution plan success and sample efficiency over direct action-sequence prediction.

Why It Matters

The research addresses a limitation of current Transformer-based generalized planners, which rely on large datasets and model sizes and can drift off valid states over long horizons. By learning an explicit transition model and planning through state rollouts, it offers a more sample-efficient approach with potential applications in robotics and automated decision-making.

Key Takeaways

  • Formulates generalized planning as a transition-model learning problem rather than action-sequence prediction.
  • Demonstrates that explicit transition modeling enhances out-of-distribution plan success.
  • Achieves better sample efficiency with smaller models than action-sequence prediction baselines.
  • Evaluates multiple state representations and neural architectures for the successor-state model.
  • Offers insights into how explicit world-state evolution can inform future neural planners.

Computer Science > Artificial Intelligence

arXiv:2602.23148 (cs) · Submitted on 26 Feb 2026

Title: On Sample-Efficient Generalized Planning via Learned Transition Models

Authors: Nitin Gupta, Vishal Pallagani, John A. Aydin, Biplav Srivastava

Abstract: Generalized planning studies the construction of solution strategies that generalize across families of planning problems sharing a common domain model, formally defined by a transition function $\gamma : S \times A \rightarrow S$. Classical approaches achieve such generalization through symbolic abstractions and explicit reasoning over $\gamma$. In contrast, recent Transformer-based planners, such as PlanGPT and Plansformer, largely cast generalized planning as direct action-sequence prediction, bypassing explicit transition modeling. While effective on in-distribution instances, these approaches typically require large datasets and model sizes, and often suffer from state drift in long-horizon settings due to the absence of explicit world-state evolution. In this work, we formulate generalized planning as a transition-model learning problem, in which a neural model explicitly approximates the successor-state function $\hat{\gamma} \approx \gamma$ and generates plans by rolling out symbolic state trajectories. Instead of predicting actions directly, the model aut...
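The abstract describes generating plans by rolling out symbolic state trajectories under a learned successor-state model $\hat{\gamma}$. A minimal sketch of that rollout loop, assuming a toy hand-coded `gamma_hat` as a stand-in for the paper's neural model (the domain, function names, and search strategy here are illustrative, not the authors' implementation):

```python
from collections import deque

# Toy stand-in for a learned successor-state model gamma_hat: (state, action) -> state.
# In the paper's setting this would be a neural network approximating gamma;
# here a tiny blocks-world-style rule table illustrates the rollout loop.
def gamma_hat(state, action):
    s = set(state)
    if action == "pickup" and {"on_table", "hand_empty"} <= s:
        return frozenset(s - {"on_table", "hand_empty"} | {"holding"})
    if action == "putdown" and "holding" in s:
        return frozenset(s - {"holding"} | {"on_table", "hand_empty"})
    return None  # action inapplicable in this state

ACTIONS = ["pickup", "putdown"]

def plan_by_rollout(init, goal, max_depth=10):
    """Breadth-first search over symbolic states, expanding with gamma_hat.

    States are frozensets of propositions; a plan is a list of actions
    whose rollout reaches a state satisfying the goal.
    """
    frontier = deque([(init, [])])
    visited = {init}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # goal propositions all hold
            return plan
        if len(plan) >= max_depth:
            continue
        for a in ACTIONS:
            nxt = gamma_hat(state, a)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [a]))
    return None  # no plan within max_depth

init = frozenset({"on_table", "hand_empty"})
print(plan_by_rollout(init, goal=frozenset({"holding"})))  # -> ['pickup']
```

Because every step goes through the (learned) transition model, the planner always tracks an explicit symbolic state, which is what the paper argues mitigates the state drift seen in direct action-sequence prediction.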

