[2602.23446] Human Supervision as an Information Bottleneck: A Unified Theory of Error Floors in Human-Guided Learning

arXiv - Machine Learning

Computer Science > Machine Learning — arXiv:2602.23446 (cs) [Submitted on 26 Feb 2026]

Title: Human Supervision as an Information Bottleneck: A Unified Theory of Error Floors in Human-Guided Learning
Authors: Alejandro Rodriguez Dominguez

Abstract: Large language models are trained primarily on human-generated data and feedback, yet they exhibit persistent errors arising from annotation noise, subjective preferences, and the limited expressive bandwidth of natural language. We argue that these limitations reflect structural properties of the supervision channel rather than model scale or optimization. We develop a unified theory showing that whenever the human supervision channel is not sufficient for a latent evaluation target, it acts as an information-reducing channel that induces a strictly positive excess-risk floor for any learner dominated by it. We formalize this Human-Bounded Intelligence limit and show that across six complementary frameworks (operator theory, PAC-Bayes, information theory, causal inference, category theory, and game-theoretic analyses of reinforcement learning from human feedback), non-sufficiency yields strictly positive lower bounds arising from the same structural decomposition into annotation noise, preference distortion, and semantic compression. Th...
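The abstract's core claim, that a non-sufficient supervision channel is information-reducing, can be illustrated numerically in the simplest setting. The sketch below is not from the paper; it assumes annotation noise modeled as a binary symmetric channel with a hypothetical flip rate `eps`, and checks that the noisy label carries strictly less mutual information about the true label than the clean one, by exactly the binary entropy of the flip rate.

```python
import numpy as np

def binary_entropy(p):
    """H2(p) in bits, with the 0*log(0) = 0 convention."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mutual_information(joint):
    """I(A;B) in bits from a 2-D joint pmf table."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of A
    py = joint.sum(axis=0, keepdims=True)   # marginal of B
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

eps = 0.1  # assumed annotator flip probability (illustrative, not from the paper)
# True label Y ~ Uniform{0,1}; the annotation channel is BSC(eps).
joint = np.array([[(1 - eps) / 2, eps / 2],
                  [eps / 2, (1 - eps) / 2]])  # joint pmf of (Y, noisy label)

i_clean = 1.0                        # I(Y;Y) = H(Y) = 1 bit
i_noisy = mutual_information(joint)  # = 1 - H2(eps), strictly below 1 bit
```

By the data-processing inequality, no downstream computation on the noisy labels can recover the H2(eps) bits lost in the channel, which is the general mechanism behind the error-floor bounds the abstract describes.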

Originally published on March 02, 2026. Curated by AI News.

Related Articles

LLMs

[D] We reimplemented Claude Code entirely in Python — open source, works with local models

Hey everyone, We just released Claw Code Agent — a full Python reimplementation of the Claude Code agent architecture, based on the rever...

Reddit - Machine Learning · 1 min ·
LLMs

[D] Production gaps in context-window compression for AI agent memory

I've been working on AI memory infrastructure and recently spent a few weeks reading through the source code of an open-source context-win...

Reddit - Machine Learning · 1 min ·
LLMs

How Claude Web tried to break out of its container, provided all files on the system, scanned the networks, etc.

Originally wasn't going to write about this - on one hand thought it's prolly already known, on the other hand I didn't feel like it was ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

Combining the Robot Operating System (ROS) with LLMs for natural-language control

Over the past few decades, robotics researchers have developed a wide range of increasingly advanced robots that can autonomously complet...

Reddit - Artificial Intelligence · 1 min ·

