[2602.22519] A Mathematical Theory of Agency and Intelligence
Summary
This paper presents a mathematical framework for understanding agency and intelligence in AI systems, introducing bipredictability, a measure of how much of the information a system deploys is actually shared between its observations, actions, and outcomes.
Why It Matters
The study addresses a critical gap in AI development by distinguishing between agency and intelligence, emphasizing the need for systems that not only act but also learn and adapt. This framework could lead to more resilient AI systems capable of operating effectively in dynamic environments.
Key Takeaways
- Introduces bipredictability as a measure of effective resource use in AI (a minimal estimation sketch follows this list).
- Distinguishes agency (action capacity) from intelligence (learning capacity).
- Proposes a feedback architecture for real-time monitoring of bipredictability.
- Validates the predicted bounds in a physical system (a double pendulum), in reinforcement learning agents, and in multi-turn LLM conversations.
- Highlights the limitations of current AI systems in achieving true intelligence.
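The paper's formal definition of bipredictability is not reproduced in this summary, so the snippet below is only a minimal sketch of the underlying idea: treat logged (observation, action, outcome) triples as discrete variables and report how much of their summed information is redundant across all three. The helper names, the use of total correlation, and the normalization are illustrative assumptions, not the authors' formula.

```python
# Minimal sketch (assumed proxy, not the paper's definition of P):
# how much of the information in logged (observation, action, outcome)
# triples is shared across the three variables.
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of discrete labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def shared_fraction(obs, act, out):
    """Total correlation of the three (discretized) variables, normalized by
    the sum of their marginal entropies; ranges from 0 (independent) up to
    2/3 (three identical copies)."""
    h_joint = entropy(list(zip(obs, act, out)))
    h_sum = entropy(obs) + entropy(act) + entropy(out)
    if h_sum == 0.0:
        return 0.0
    return (h_sum - h_joint) / h_sum

# Toy check: perfectly coupled variables score high, independent ones near zero.
rng = np.random.default_rng(0)
coupled = rng.integers(0, 4, size=1000)
print(shared_fraction(coupled, coupled, coupled))        # ~0.67
print(shared_fraction(coupled,
                      rng.integers(0, 4, size=1000),
                      rng.integers(0, 4, size=1000)))    # close to 0
```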
Computer Science > Artificial Intelligence
arXiv:2602.22519 (cs)
[Submitted on 26 Feb 2026]
Title: A Mathematical Theory of Agency and Intelligence
Authors: Wael Hafez, Chenan Wei, Rodrigo Felipe, Amir Nazeri, Cameron Reid
Abstract: To operate reliably under changing conditions, complex systems require feedback on how effectively they use resources, not just on whether objectives are met. Current AI systems process vast amounts of information to produce sophisticated predictions, yet predictions can appear successful while the underlying interaction with the environment degrades. What is missing is a principled measure of how much of the total information a system deploys is actually shared between its observations, actions, and outcomes. We prove that this shared fraction, which we term bipredictability P, is intrinsic to any interaction, derivable from first principles, and strictly bounded: P can reach unity in quantum systems, is at most 0.5 in classical systems, and is lower still once agency (action selection) is introduced. We confirm these bounds in a physical system (double pendulum), in reinforcement learning agents, and in multi-turn LLM conversations. These results distinguish agency from intelligence: agency is the capacity to act on predictions, whereas intelligence additionally requires learning from interaction, self-monitoring of its learning effectiveness...
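Read alongside the "feedback architecture for real-time monitoring" takeaway, the abstract suggests tracking the quality of the interaction itself rather than only task success. A minimal sketch of that idea, assuming the shared_fraction() estimator from the sketch above and purely placeholder window/threshold values (the authors' actual architecture is not described in this summary), is:

```python
# Hedged sketch of a monitoring loop: re-estimate the shared-information
# fraction over a sliding window of recent interaction and flag degradation.
# WINDOW, ALERT_BELOW, and the class itself are illustrative choices, and
# shared_fraction() is the assumed estimator defined in the sketch above.
from collections import deque

WINDOW = 500        # hypothetical number of recent (obs, act, out) triples kept
ALERT_BELOW = 0.2   # hypothetical alarm threshold on the estimate

class BipredictabilityMonitor:
    def __init__(self, window=WINDOW, alert_below=ALERT_BELOW):
        self.buffer = deque(maxlen=window)
        self.alert_below = alert_below

    def update(self, obs, act, out):
        """Record one interaction step; return (estimate, degraded_flag).
        Early estimates are unreliable until the window fills."""
        self.buffer.append((obs, act, out))
        o, a, r = zip(*self.buffer)
        p_hat = shared_fraction(o, a, r)
        return p_hat, p_hat < self.alert_below
```

The threshold above is an arbitrary constant for illustration; a real deployment would calibrate it against the system's own baseline behavior rather than against the theoretical bounds quoted in the abstract.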