[2603.24917] Estimating near-verbatim extraction risk in language models with decoding-constrained beam search
Computer Science > Computation and Language

arXiv:2603.24917 (cs)

[Submitted on 26 Mar 2026]

Title: Estimating near-verbatim extraction risk in language models with decoding-constrained beam search

Authors: A. Feder Cooper, Mark A. Lemley, Christopher De Sa, Lea Duesterwald, Allison Casasola, Jamie Hayes, Katherine Lee, Daniel E. Ho, Percy Liang

Abstract: Recent work shows that standard greedy-decoding extraction methods for quantifying memorization in LLMs miss how extraction risk varies across sequences. Probabilistic extraction -- computing the probability of generating a target suffix given a prefix under a decoding scheme -- addresses this, but is tractable only for verbatim memorization, missing near-verbatim instances that pose similar privacy and copyright risks. Quantifying near-verbatim extraction risk is expensive: the set of near-verbatim suffixes is combinatorially large, and reliable Monte Carlo (MC) estimation can require ~100,000 samples per sequence. To mitigate this cost, we introduce decoding-constrained beam search, which yields deterministic lower bounds on near-verbatim extraction risk at a cost comparable to ~20 MC samples per sequence.
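To make the two estimators in the abstract concrete, below is a minimal, self-contained Python sketch. It is not the paper's implementation: `ToyLM` is a stand-in character-level model, `near_verbatim` is a hypothetical similarity test, and the bound uses plain beam search rather than the paper's decoding-constrained algorithm. The lower bound is valid because summing the probabilities of any subset of near-verbatim suffixes can only undercount the total near-verbatim probability.

```python
# Sketch only: ToyLM, near_verbatim, and all parameters are illustrative
# assumptions, not the paper's method or models.
import math
import random
from difflib import SequenceMatcher


class ToyLM:
    """Stand-in LM: next-token distribution over a tiny vocabulary."""
    VOCAB = list("abc")

    def next_token_probs(self, context: str) -> dict[str, float]:
        # Arbitrary toy rule: bias toward repeating the last character.
        probs = {t: 1.0 for t in self.VOCAB}
        if context:
            probs[context[-1]] += 1.0
        z = sum(probs.values())
        return {t: p / z for t, p in probs.items()}


def near_verbatim(candidate: str, target: str, threshold: float = 0.9) -> bool:
    """Hypothetical similarity test: near-verbatim above `threshold`."""
    return SequenceMatcher(None, candidate, target).ratio() >= threshold


def mc_estimate(model, prefix, target, n_samples=10_000, seed=0):
    """Monte Carlo: fraction of sampled suffixes that are near-verbatim."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        ctx, out = prefix, ""
        for _ in range(len(target)):
            probs = model.next_token_probs(ctx)
            tok = rng.choices(list(probs), weights=probs.values())[0]
            ctx += tok
            out += tok
        hits += near_verbatim(out, target)
    return hits / n_samples


def beam_lower_bound(model, prefix, target, beam_width=20):
    """Deterministic lower bound: sum the probabilities of near-verbatim
    suffixes found among the top `beam_width` beams."""
    beams = [("", 0.0)]  # (suffix so far, log-probability)
    for _ in range(len(target)):
        expanded = []
        for suffix, lp in beams:
            for tok, p in model.next_token_probs(prefix + suffix).items():
                expanded.append((suffix + tok, lp + math.log(p)))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return sum(math.exp(lp) for s, lp in beams if near_verbatim(s, target))


lm, prefix, target = ToyLM(), "ab", "ccc"
print("MC estimate:     ", mc_estimate(lm, prefix, target, n_samples=5_000))
print("Beam lower bound:", beam_lower_bound(lm, prefix, target))
```

On this toy model the two numbers agree closely; the practical contrast the abstract draws is that the beam-search bound is deterministic and cheap, while the MC estimate carries sampling error that shrinks only with very large sample counts.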