[2603.02597] GPUTOK: GPU Accelerated Byte Level BPE Tokenization
Computer Science > Computation and Language
arXiv:2603.02597 (cs)
[Submitted on 3 Mar 2026]

Title: GPUTOK: GPU Accelerated Byte Level BPE Tokenization
Authors: Venu Gopal Kadamba, Kanishkha Jaisankar

Abstract: As large language models move toward million-token context windows, CPU tokenizers become a major bottleneck: they process text sequentially while powerful GPUs sit idle. We built a GPU-based byte-level BPE tokenizer that follows GPT-2's merge rules. It includes a basic BlockBPE-style kernel and a faster, optimized version that uses a cuCollections static map, CUB reductions, and a pybind11 interface for Python. On WikiText-103 sequences of up to 131k tokens, the optimized GPU tokenizer produces the same tokens as a CPU reference and, on the longest inputs, runs about 1.7x faster than tiktoken and about 7.6x faster than the HuggingFace GPT-2 tokenizer. Nsight profiling shows that 70-80% of CUDA API time goes to memory allocation, so adding memory pooling should yield the largest speedup next. Tests on generation tasks with WikiText-103 prompts show that our GPU tokenizer's outputs stay within about one percentage point of tiktoken and HuggingFace GPT-2 on similarity and overlap metrics, meaning it preserves output quality while making long-context inference more practical.

Subjects: Computation and Language (cs.CL)
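For readers unfamiliar with the merge procedure the abstract refers to, the sketch below shows the sequential CPU baseline that GPT-2-style byte-level BPE follows: repeatedly merge the adjacent token pair with the lowest merge rank until no ranked pair remains. The merge table here is a toy example, not GPT-2's real vocabulary, and this is our illustration rather than the paper's implementation; the paper's contribution is parallelizing the pair-rank lookup and merge steps on the GPU.

```python
# Minimal CPU reference for GPT-2-style byte-level BPE merging.
# Toy merge ranks (lower rank merges first); NOT GPT-2's real vocabulary.

def bpe_merge(byte_seq, ranks):
    """Greedily apply the lowest-rank adjacent merge until none remains."""
    tokens = list(byte_seq)
    while len(tokens) > 1:
        # Look up the merge rank of every adjacent pair.
        candidates = [(ranks[(tokens[i], tokens[i + 1])], i)
                      for i in range(len(tokens) - 1)
                      if (tokens[i], tokens[i + 1]) in ranks]
        if not candidates:
            break  # no mergeable pair left
        _, i = min(candidates)      # pair with the lowest rank wins
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
    return tokens

# Hypothetical merge table for the word "hello".
toy_ranks = {(b"h", b"e"): 0, (b"l", b"l"): 1,
             (b"he", b"llo"): 2, (b"ll", b"o"): 3}
seq = [bytes([b]) for b in b"hello"]
print(bpe_merge(seq, toy_ranks))  # -> [b'hello']
```

The inner pair-rank lookup is exactly the step a GPU version can replace with a parallel hash-table probe (e.g. a cuCollections static map), which is why the sequential loop dominates CPU tokenization time on long inputs.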