[2603.27438] The Novelty Bottleneck: A Framework for Understanding Human Effort Scaling in AI-Assisted Work


Computer Science > Artificial Intelligence
arXiv:2603.27438 (cs)
[Submitted on 28 Mar 2026]

Title: The Novelty Bottleneck: A Framework for Understanding Human Effort Scaling in AI-Assisted Work
Authors: Jacky Liang

Abstract: We propose a stylized model of human-AI collaboration that isolates a mechanism we call the novelty bottleneck: the fraction of a task requiring human judgment creates an irreducible serial component analogous to Amdahl's Law in parallel computing. The model assumes that tasks decompose into atomic decisions, a fraction $\nu$ of which are "novel" (not covered by the agent's prior), and that specification, verification, and error correction each scale with task size. From these assumptions, we derive several non-obvious consequences: (1) there is no smooth sublinear regime for human effort: it transitions sharply from $O(E)$ to $O(1)$ with no intermediate scaling class; (2) better agents improve the coefficient on human effort but not the exponent; (3) for organizations of $n$ humans with AI agents, optimal team size decreases with agent capability; (4) wall-clock time achieves $O(\sqrt{E})$ through team parallelism but total human effort remains $O(E)$; and (5) the resulting AI safety profile is asymmetric -- AI is bottlenecked on frontier research but unbottlenecked on exploiting existing ...
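The abstract's scaling claims can be made concrete with a small sketch. The functional forms below are assumptions chosen to reproduce the stated consequences, not the paper's actual model: human effort is linear in task size $E$ whenever the novelty fraction $\nu > 0$ (no sublinear regime), agent capability shrinks only the coefficient, and wall-clock time with $n$ humans trades parallelism against a coordination cost that grows with team size.

```python
import math

def human_effort(E, nu, agent_skill=1.0):
    """Total human effort for a task of E atomic decisions.

    Sketch of the abstract's claims, under assumed functional forms:
    a fraction `nu` of decisions is novel and needs human judgment;
    a more capable agent shrinks the per-decision coefficient but
    never the exponent.
    """
    if nu == 0.0:
        return 1.0                # O(1): only fixed specification overhead
    coeff = 1.0 / agent_skill     # capability improves the constant only
    return coeff * nu * E         # O(E): linear, no intermediate class

def wall_clock(E, nu, n_humans, coord_cost=1.0):
    """Wall-clock time for n humans splitting the novel decisions,
    plus an assumed linear coordination term in team size."""
    return (nu * E) / n_humans + coord_cost * n_humans

def optimal_team_size(E, nu, coord_cost=1.0):
    """Team size minimizing wall_clock: n* = sqrt(nu*E / coord_cost),
    which yields wall-clock time 2*sqrt(coord_cost * nu * E) = O(sqrt(E))
    while total human effort n * (nu*E/n) stays O(E)."""
    return math.sqrt(nu * E / coord_cost)
```

For example, doubling `E` doubles `human_effort` for any fixed `nu > 0`, while doubling `agent_skill` halves it; and at `n = optimal_team_size(E, nu)` the wall-clock time grows only as the square root of `E`, matching consequences (1), (2), and (4).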

Originally published on March 31, 2026. Curated by AI News.

