[2603.27438] The Novelty Bottleneck: A Framework for Understanding Human Effort Scaling in AI-Assisted Work
Computer Science > Artificial Intelligence

arXiv:2603.27438 (cs)

[Submitted on 28 Mar 2026]

Title: The Novelty Bottleneck: A Framework for Understanding Human Effort Scaling in AI-Assisted Work

Authors: Jacky Liang

Abstract: We propose a stylized model of human-AI collaboration that isolates a mechanism we call the novelty bottleneck: the fraction of a task requiring human judgment creates an irreducible serial component analogous to Amdahl's Law in parallel computing. The model assumes that tasks decompose into atomic decisions, a fraction $\nu$ of which are "novel" (not covered by the agent's prior), and that specification, verification, and error correction each scale with task size. From these assumptions, we derive several non-obvious consequences: (1) there is no smooth sublinear regime for human effort: it transitions sharply from $O(E)$ to $O(1)$, with no intermediate scaling class; (2) better agents improve the coefficient on human effort but not the exponent; (3) for organizations of $n$ humans with AI agents, optimal team size decreases with agent capability; (4) wall-clock time achieves $O(\sqrt{E})$ through team parallelism, but total human effort remains $O(E)$; and (5) the resulting AI safety profile is asymmetric -- AI is bottlenecked on frontier research but unbottlenecked on exploiting existing ...
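The sharp transition claimed in consequence (1) can be illustrated with a minimal sketch. Assuming (hypothetically, since the paper's exact functional form is not shown here) that human effort is $\nu E$ novel decisions plus an $O(1)$ overhead $c$, any $\nu > 0$ yields linear scaling in task size $E$, while $\nu = 0$ collapses to a constant -- there is no intermediate regime:

```python
def human_effort(E: float, nu: float, c: float = 1.0) -> float:
    """Hypothetical effort model: a fraction nu of E atomic decisions is
    novel and requires human judgment; c is an O(1) specification and
    verification overhead. Not the paper's exact formulation."""
    return nu * E + c

# Growing E by 10x: effort grows ~10x whenever nu > 0 (O(E) regime),
# but stays flat when nu == 0 (O(1) regime).
for nu in (0.0, 0.01, 0.1):
    ratio = human_effort(10_000, nu) / human_effort(1_000, nu)
    print(f"nu={nu}: effort ratio for 10x task size = {ratio:.2f}")
```

Note that a more capable agent lowers $\nu$ (and hence the coefficient on $E$) but, so long as $\nu > 0$, cannot change the exponent -- matching consequence (2).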