[2604.02343] Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains
Computer Science > Machine Learning

arXiv:2604.02343 (cs) · Submitted on 9 Feb 2026

Title: Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains
Authors: Roy Rinberg, Annabelle Michael Carrell, Simon Henniger, Nicholas Carlini, Keri Warr

Abstract: We study the compression of LLM-generated text across lossless and lossy regimes, characterizing a compression-compute frontier where more compression is possible at the cost of more compute. For lossless compression, domain-adapted LoRA adapters can improve LLM-based arithmetic coding by 2x over compression with the base LLM alone. For lossy compression, prompting a model for a succinct rewrite and then applying arithmetic coding can achieve compression ratios of approximately 0.03, a 2x improvement over compressing the original response. We further introduce Question-Asking compression (QA), an interactive lossy protocol inspired by the game "Twenty Questions": a small model iteratively refines its response by asking yes/no questions of a stronger model, transferring exactly one bit per answer. On 8 benchmarks spanning math, science, and code, 10 binary questions recover 23% to 72% of the capability gap between a small and a large model on standard benchmarks and 7% to 38% on harder benchmarks, achieving compression ratios of 0.0006 to 0.004. This is over 100x...
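The lossless pipeline rests on arithmetic coding driven by a model's next-token probabilities: the better the model predicts the text, the narrower the coding interval and the shorter the bit string. A minimal sketch of that principle, using exact fractions and a fixed toy symbol distribution in place of an actual LLM conditional (the `PROBS` table and the `encode`/`decode` helpers are illustrative names, not from the paper):

```python
import math
from fractions import Fraction

# Toy stand-in for a language model's next-symbol distribution.
# The paper conditions on context with an LLM; a fixed table keeps
# this sketch self-contained and exact.
PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def _narrow(low, width, sym):
    """Shrink the coding interval [low, low + width) for one symbol."""
    cum = Fraction(0)
    for s, p in PROBS.items():
        if s == sym:
            return low + cum * width, width * p
        cum += p
    raise KeyError(sym)

def encode(msg):
    """Return a bit string pointing inside the message's interval."""
    low, width = Fraction(0), Fraction(1)
    for sym in msg:
        low, width = _narrow(low, width, sym)
    # Shortest dyadic interval [n/2^k, (n+1)/2^k) inside [low, low + width):
    k = 0
    while Fraction(1, 2 ** k) > width / 2:
        k += 1
    n = math.ceil(low * 2 ** k)
    return format(n, f"0{k}b") if k else ""

def decode(bits, length):
    """Replay the model's intervals to recover `length` symbols."""
    val = Fraction(int(bits, 2), 2 ** len(bits)) if bits else Fraction(0)
    low, width, out = Fraction(0), Fraction(1), ""
    for _ in range(length):
        cum = Fraction(0)
        for s, p in PROBS.items():
            if low + cum * width <= val < low + (cum + p) * width:
                out += s
                low, width = low + cum * width, width * p
                break
            cum += p
    return out
```

The output length is about -log2 of the message's probability under the model, which is why swapping in a LoRA adapter that assigns the text higher probability directly shortens the code.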
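The one-bit-per-answer budget of the QA protocol can be illustrated with a Twenty-Questions-style search: each yes/no answer from the strong model can halve a space of candidate responses, so 10 answers distinguish up to 2^10 = 1024 candidates. The `qa_refine` helper and `oracle` callback below are illustrative stand-ins, not the paper's implementation (in the actual protocol the small model poses natural-language questions and revises free-form text):

```python
def qa_refine(candidates, oracle, budget):
    """Narrow a candidate pool using yes/no answers from a stronger
    model; each answer transfers exactly one bit of information."""
    pool = list(candidates)
    for _ in range(budget):
        if len(pool) <= 1:
            break  # nothing left to disambiguate
        half = pool[: len(pool) // 2]
        # The question sent to the strong model amounts to:
        # "is the correct response in `half`?"
        pool = half if oracle(half) else pool[len(pool) // 2:]
    return pool[0]

# 10 binary answers suffice to pin down one of 1024 candidates.
target = 777
answer = qa_refine(range(1024), lambda half: target in half, budget=10)
```

This halving bound is also where the reported compression ratios come from: transmitting 10 bits instead of the large model's full response.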