[2411.06498] Barriers to Complexity-Theoretic Proofs that "AGI" Using Machine Learning is Impossible
arXiv:2411.06498 (cs) [Submitted on 10 Nov 2024 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: Barriers to Complexity-Theoretic Proofs that "AGI" Using Machine Learning is Impossible
Authors: Michael Guerzhoy

Abstract: A recent paper (van Rooij et al. 2024) claims to have proved that achieving human-like intelligence through learning from data is intractable in a complexity-theoretic sense. We point out that the proof relies on an unjustified assumption about the distribution of (input, output) tuples in the data. We briefly discuss that assumption in the context of two fundamental barriers to repairing the proof: the need to precisely define "human-like," and the need to account for the fact that a particular machine learning system will have particular inductive biases that are key to the analysis. Another attempt to repair the proof, by focusing on subsets of the data, faces barriers in terms of defining those subsets.

Subjects: Artificial Intelligence (cs.AI); Computational Complexity (cs.CC)
Cite as: arXiv:2411.06498 [cs.AI] (or arXiv:2411.06498v2 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2411.06498 (arXiv-issued DOI via DataCite)
Submission history: [v1] Sun, 10 Nov 2024 15:47:30 UTC ...