[2602.23997] Foundation World Models for Agents that Learn, Verify, and Adapt Reliably Beyond Static Environments
Computer Science > Machine Learning
arXiv:2602.23997 (cs)

[Submitted on 27 Feb 2026]

Title: Foundation World Models for Agents that Learn, Verify, and Adapt Reliably Beyond Static Environments
Authors: Florent Delgrange

Abstract: The next generation of autonomous agents must not only learn efficiently but also act reliably and adapt their behavior in open worlds. Standard approaches typically assume fixed tasks and environments with little or no novelty, which limits world models' ability to support agents that must evolve their policies as conditions change. This paper outlines a vision for foundation world models: persistent, compositional representations that unify reinforcement learning, reactive/program synthesis, and abstraction mechanisms. We propose an agenda built around four components: (i) learnable reward models from specifications to support optimization with clear objectives; (ii) adaptive formal verification integrated throughout learning; (iii) online abstraction calibration to quantify the reliability of the model's predictions; and (iv) test-time synthesis and world-model generation guided by verifiers. Together, these components enable agents to synthesize verifiable programs, derive new policies from a small number of interactions, and maintain correctness while adapting to novelty.
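To make component (i) concrete: one established way to turn a symbolic specification into a learnable reward signal is a reward-machine-style encoding (Icarte et al., 2018), where a small automaton over atomic propositions emits rewards as the specification progresses. The following is a minimal sketch under that assumption, not the paper's (unspecified) construction; the RewardMachine class and the "key"/"door" labels are hypothetical.

# A minimal sketch of component (i): deriving a reward signal from a
# symbolic specification via a reward-machine-style automaton.
# Illustrative only; names and labels here are hypothetical.

from typing import Dict, FrozenSet, Tuple

class RewardMachine:
    """Finite-state machine mapping labeled transitions to rewards."""

    def __init__(self,
                 transitions: Dict[Tuple[str, str], Tuple[str, float]],
                 initial_state: str) -> None:
        # transitions: (machine state, proposition) -> (next state, reward)
        self.transitions = transitions
        self.state = initial_state

    def step(self, labels: FrozenSet[str]) -> float:
        """Advance on the propositions true in the current environment
        state; return the reward attached to the taken transition."""
        for (state, label), (next_state, reward) in self.transitions.items():
            if state == self.state and label in labels:
                self.state = next_state
                return reward
        return 0.0  # no matching transition: state unchanged, zero reward

# Specification "first pick up the key, then reach the door" as a machine.
rm = RewardMachine(
    transitions={
        ("u0", "key"): ("u1", 0.0),
        ("u1", "door"): ("u_acc", 1.0),
    },
    initial_state="u0",
)

print(rm.step(frozenset({"key"})))   # 0.0 -- machine advances to u1
print(rm.step(frozenset({"door"})))  # 1.0 -- specification satisfied

Because the machine's state is explicit and finite, a sketch like this also suggests how components (ii) and (iv) could hook in: a verifier can check policies against the same automaton that generates the reward.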