[2602.17217] Continual learning and refinement of causal models through dynamic predicate invention
Summary
This paper presents a framework for online construction of symbolic causal world models, enhancing agents' decision-making through continual learning and dynamic predicate invention.
Why It Matters
The research addresses the limitations of traditional world modelling methods in AI, particularly their sample inefficiency and lack of transparency. By proposing an approach that integrates continuous model learning into the agent's decision loop, this work has implications for improving AI's ability to navigate complex environments, making it relevant to advances in artificial intelligence and machine learning.
Key Takeaways
- Introduces a framework for dynamic predicate invention in causal models.
- Enhances sample efficiency compared to traditional neural network methods.
- Demonstrates scalability in complex relational dynamics.
- Integrates continuous model learning into agent decision-making.
- Addresses issues of transparency and efficiency in AI world modeling.
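To make the core idea of predicate invention concrete, here is a minimal illustrative sketch, not the paper's Meta-Interpretive Learning system: when two learned causal rules share a common body pattern, that pattern can be factored out into a new, reusable "invented" predicate. All predicate and function names below are hypothetical.

```python
# Illustrative sketch of predicate invention (hypothetical names, not the
# paper's MIL implementation). Rules are (head, body) pairs; the shared
# body subset across rules is factored into a new invented predicate.

def invent_predicate(rules, name="inv_1"):
    """Factor the common body subset of all rules into a new predicate."""
    common = set(rules[0][1])
    for _, body in rules[1:]:
        common &= set(body)          # literals shared by every rule body
    if not common:
        return rules, None           # nothing to abstract
    invented = (name, sorted(common))
    # Rewrite each rule to call the invented predicate instead.
    rewritten = [(head, sorted(set(body) - common) + [name])
                 for head, body in rules]
    return rewritten, invented

# Two causal rules sharing the pattern {holding(X), clear(Y)}:
rules = [
    ("stack(X,Y)",  ["holding(X)", "clear(Y)", "on_table(Y)"]),
    ("put_on(X,Y)", ["holding(X)", "clear(Y)", "block(Y)"]),
]
rewritten, invented = invent_predicate(rules)
print(invented)   # ('inv_1', ['clear(Y)', 'holding(X)'])
```

The invented predicate acts as a reusable abstraction: both rules now reference `inv_1`, and future rules can reuse it, which is the kind of concept hierarchy the paper builds online.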
Computer Science > Artificial Intelligence — arXiv:2602.17217 (cs)
[Submitted on 19 Feb 2026]
Title: Continual learning and refinement of causal models through dynamic predicate invention
Authors: Enrique Crespo-Fernandez, Oliver Ray, Telmo de Menezes e Silva Filho, Peter Flach
Abstract: Efficiently navigating complex environments requires agents to internalize the underlying logic of their world, yet standard world modelling methods often struggle with sample inefficiency, lack of transparency, and poor scalability. We propose a framework for constructing symbolic causal world models entirely online, integrating continuous model learning and repair into the agent's decision loop. By leveraging Meta-Interpretive Learning and predicate invention to find semantically meaningful and reusable abstractions, the agent constructs a hierarchy of disentangled, high-quality concepts from its observations. We demonstrate that our lifted inference approach scales to domains with complex relational dynamics, where propositional methods suffer from combinatorial explosion, while achieving sample efficiency orders of magnitude higher than the established PPO neural-network baseline.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.17217 [cs.AI]