[2602.13934] Why Code, Why Now: Learnability, Computability, and the Real Limits of Machine Learning

arXiv - Machine Learning

Summary

The paper examines the learnability and computability limits of machine learning, contrasting the dense, structured feedback available in code generation with the sparse feedback typical of reinforcement learning, and proposes a five-level hierarchy of learnability.

Why It Matters

Understanding the limits of machine learning is crucial for researchers and practitioners. This paper challenges the assumption that simply scaling models will address every remaining challenge, and provides a framework for evaluating the learnability of tasks that can guide future research and applications in AI.

Key Takeaways

  • Code generation benefits from structured feedback, unlike reinforcement learning.
  • A five-level hierarchy of learnability is proposed based on information structure.
  • The ceiling on ML progress is more about task learnability than model size.
  • Supervised learning on code scales predictably, while reinforcement learning does not.
  • The assumption that scaling alone will solve ML challenges should be scrutinized.

Computer Science > Machine Learning
arXiv:2602.13934 (cs) [Submitted on 15 Feb 2026]

Title: Why Code, Why Now: Learnability, Computability, and the Real Limits of Machine Learning
Authors: Zhimin Zhao

Abstract: Code generation has progressed more reliably than reinforcement learning, largely because code has an information structure that makes it learnable. Code provides dense, local, verifiable feedback at every token, whereas most reinforcement learning problems do not. This difference in feedback quality is not binary but graded. We propose a five-level hierarchy of learnability based on information structure and argue that the ceiling on ML progress depends less on model size than on whether a task is learnable at all. The hierarchy rests on a formal distinction among three properties of computational problems (expressibility, computability, and learnability). We establish their pairwise relationships, including where implications hold and where they fail, and present a unified template that makes the structural differences explicit. The analysis suggests why supervised learning on code scales predictably while reinforcement learning does not, and why the common assumption that scaling alone will solve remaining ML challenges warrants scrutiny.

Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL...
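The feedback-density contrast the abstract describes can be made concrete with a toy sketch. This is an illustrative example, not code from the paper; the task, function names, and the character-level stand-in for "per-token" feedback are all assumptions chosen for brevity:

```python
# Toy illustration (hypothetical, not from the paper) of the contrast the
# abstract draws: supervised learning on code receives a verifiable signal at
# every token, while most RL setups receive one scalar reward per episode.

TARGET = "print(1+1)"  # the "program" the learner must reproduce


def dense_feedback(candidate: str) -> list[bool]:
    """Per-position verifiable signal (characters stand in for tokens):
    one correct/incorrect bit at every position, so an error is localized."""
    return [i < len(TARGET) and c == TARGET[i] for i, c in enumerate(candidate)]


def sparse_feedback(candidate: str) -> float:
    """Episode-level scalar reward, as in most RL problems: the learner only
    finds out at the end whether the whole program succeeded, with no hint
    of where it went wrong."""
    return 1.0 if candidate == TARGET else 0.0


candidate = "print(1+2)"
per_token = dense_feedback(candidate)        # pinpoints the single wrong character
episode_reward = sparse_feedback(candidate)  # only reports overall failure

print(f"{sum(per_token)} of {len(candidate)} positions correct")
print(f"episode reward: {episode_reward}")
```

Here the dense signal identifies exactly which position is wrong, while the sparse signal collapses the same information into a single 0.0, which is the structural difference the paper's learnability hierarchy grades.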
