[2305.11098] A Simple Generative Model of Logical Reasoning and Statistical Learning

arXiv - AI · 4 min read

Summary

This paper presents a Bayesian model that unifies logical reasoning and statistical learning, proposing a framework for human-like machine intelligence.

Why It Matters

The research addresses a significant gap in AI by providing a theoretical foundation that links statistical learning with logical reasoning. This could lead to advancements in creating more sophisticated AI systems that mimic human cognitive processes, enhancing their applicability in various fields.

Key Takeaways

  • Introduces a Bayesian model that integrates logical reasoning and statistical learning.
  • The model adheres to Kolmogorov's axioms and performs exact Bayesian inference efficiently.
  • Offers a new perspective on symbolic reasoning through causality in data.
  • Establishes theoretical correctness relative to established methods such as k-nearest neighbour.
  • Provides insights into developing human-like machine intelligence.
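The takeaways mention adherence to Kolmogorov's axioms and exact Bayesian inference. As a point of reference (this is generic textbook Bayes over a made-up two-hypothesis space, not the paper's construction), a minimal sketch of exact posterior computation by enumeration, with a check that the result obeys the finite-case axioms:

```python
# Toy exact Bayesian inference by enumeration.
# Hypotheses and likelihoods are illustrative only, not taken from the paper.

def posterior(prior, likelihood, observation):
    """Compute P(h | observation) exactly over a finite hypothesis space."""
    # Unnormalised posterior: prior times likelihood (Bayes' rule numerator).
    unnorm = {h: prior[h] * likelihood[h][observation] for h in prior}
    z = sum(unnorm.values())  # evidence P(observation)
    return {h: p / z for h, p in unnorm.items()}

prior = {"rain": 0.3, "dry": 0.7}
likelihood = {
    "rain": {"wet_grass": 0.9, "dry_grass": 0.1},
    "dry":  {"wet_grass": 0.2, "dry_grass": 0.8},
}

post = posterior(prior, likelihood, "wet_grass")

# Kolmogorov's axioms, finite case: non-negativity and unit total measure.
assert all(p >= 0 for p in post.values())
assert abs(sum(post.values()) - 1.0) < 1e-12
```

The enumeration is a single pass over the hypothesis space, which is the sense in which exact inference can be linear-time when that space is finite and small.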

Computer Science > Artificial Intelligence
arXiv:2305.11098 (cs)
[Submitted on 18 May 2023 (v1), last revised 23 Feb 2026 (this version, v2)]

Title: A Simple Generative Model of Logical Reasoning and Statistical Learning
Authors: Hiroyuki Kido

Abstract: Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence. Most existing work considers how to combine existing logical and statistical systems. However, there is so far no theory of inference explaining how basic approaches to statistical learning and logical reasoning stem from a common principle. Inspired by the fact that much empirical work in neuroscience suggests Bayesian (or probabilistic generative) approaches to brain function, including learning and reasoning, we here propose a simple Bayesian model of logical reasoning and statistical learning. The theory is statistically correct: it satisfies Kolmogorov's axioms, is consistent with both Fenstad's representation theorem and maximum likelihood estimation, and performs exact Bayesian inference with linear-time complexity. The theory is logically correct: it is a data-driven generalisation of uncertain reasoning from consistency, possibility, inconsistency and impossibility. The theory is correct in terms of machine learning as its solution to gene...
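The abstract compares the theory's correctness against established machine-learning methods such as k-nearest neighbour. For readers who want a concrete reference point, here is a minimal textbook k-NN classifier on a toy 1-D dataset (this is the standard majority-vote algorithm, shown only for context; the paper's comparison is theoretical):

```python
# Minimal k-nearest-neighbour classifier on toy 1-D data.
# Textbook k-NN for context only; not the paper's formal comparison.
from collections import Counter

def knn_predict(train, x, k=3):
    """Majority vote among the k training points closest to x."""
    neighbours = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [(0.1, "low"), (0.2, "low"), (0.3, "low"),
         (0.8, "high"), (0.9, "high"), (1.0, "high")]

print(knn_predict(train, 0.15))  # three nearest points are all "low"
print(knn_predict(train, 0.85))  # three nearest points are all "high"
```

k-NN makes predictions directly from stored data with no explicit probabilistic model, which is what makes it a natural baseline for a data-driven generative theory.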

