[2305.11098] A Simple Generative Model of Logical Reasoning and Statistical Learning
Summary
This paper presents a Bayesian model that unifies logical reasoning and statistical learning, proposing a framework for human-like machine intelligence.
Why It Matters
The research addresses a significant gap in AI by providing a theoretical foundation that links statistical learning with logical reasoning under a single principle. This could support the development of AI systems that more closely mimic human cognitive processes, broadening their applicability across fields.
Key Takeaways
- Introduces a Bayesian model that integrates logical reasoning and statistical learning.
- The model adheres to Kolmogorov's axioms and performs exact Bayesian inference in linear time.
- Offers a new perspective on symbolic reasoning through causality in data.
- Shows correctness in machine-learning terms by relating its solution to established methods such as k-nearest neighbour.
- Provides insights into developing human-like machine intelligence.
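To make the idea concrete, here is a toy sketch of a probabilistic generative view of logic in the spirit the abstract describes. This is our own illustration, not the paper's construction: each data point is treated as a truth assignment (a logical "model") over propositional symbols, the probability of a formula is the fraction of data points satisfying it, and conditioning is exact Bayesian inference by counting, linear in the size of the data.

```python
# Toy sketch (an assumption-laden illustration, not the paper's method):
# data points are truth assignments; a formula's probability is the
# fraction of data points that satisfy it.

from typing import Callable, Dict, List

Valuation = Dict[str, bool]
Formula = Callable[[Valuation], bool]

def prob(phi: Formula, data: List[Valuation]) -> float:
    """P(phi): fraction of observed valuations satisfying phi."""
    return sum(phi(v) for v in data) / len(data)

def cond_prob(phi: Formula, psi: Formula, data: List[Valuation]) -> float:
    """P(phi | psi) by exact counting; requires psi to hold somewhere."""
    sat_psi = [v for v in data if psi(v)]
    return sum(phi(v) for v in sat_psi) / len(sat_psi)

# Hypothetical observations over two symbols, "rain" and "wet".
data: List[Valuation] = [
    {"rain": True,  "wet": True},
    {"rain": True,  "wet": True},
    {"rain": False, "wet": True},
    {"rain": False, "wet": False},
]

rain: Formula = lambda v: v["rain"]
wet: Formula = lambda v: v["wet"]

print(prob(wet, data))             # 0.75
print(cond_prob(wet, rain, data))  # 1.0
```

Both queries run in a single pass over the data, which is the sense in which counting-based exact inference can be linear-time; how the paper generalises this to consistency, possibility, inconsistency and impossibility is developed in the full text.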
Computer Science > Artificial Intelligence
arXiv:2305.11098 (cs)
[Submitted on 18 May 2023 (v1), last revised 23 Feb 2026 (this version, v2)]
Title: A Simple Generative Model of Logical Reasoning and Statistical Learning
Authors: Hiroyuki Kido
Abstract: Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence. Most existing work considers how to combine existing logical and statistical systems. However, there is so far no theory of inference explaining how basic approaches to statistical learning and logical reasoning stem from a common principle. Inspired by the fact that much empirical work in neuroscience suggests Bayesian (or probabilistic generative) approaches to brain function, including learning and reasoning, we here propose a simple Bayesian model of logical reasoning and statistical learning. The theory is statistically correct as it satisfies Kolmogorov's axioms, is consistent with both Fenstad's representation theorem and maximum likelihood estimation, and performs exact Bayesian inference with linear-time complexity. The theory is logically correct as it is a data-driven generalisation of uncertain reasoning from consistency, possibility, inconsistency and impossibility. The theory is correct in terms of machine learning as its solution to gene...