Tokenization in Transformers v5: Simpler, Clearer, and More Modular
Published December 18, 2025 · Ita Zaporozhets, Aritra Roy Gosthipaty, Arthur Zucker, Sergio Paniego, merve, Pedro Cuenca

Transformers v5 redesigns how tokenizers work. The big tokenizers refactor separates tokenizer design from the trained vocabulary, much like PyTorch separates a neural network's architecture from its learned weights. The result is tokenizers you can inspect, customize, and train from scratch with far less friction.

TL;DR: This post explains how tokenization works in Transformers and why v5 is a major redesign, with clearer internals, a clean class hierarchy, and a single fast backend. It's a practical guide for anyone who wants to understand, customize, or train model-specific tokenizers instead of treating them as black boxes.

Table of Contents

- What is Tokenization?
- The Tokenization Pipeline
- Tokenization Algorithms
- Accessing tokenizers through transformers
- The Tokenizer Class Hierarchy in transformers
- AutoTokenizer Automatically Selects the Correct Tokenizer Class
- v5 Separates Tokenizer Architecture from Trained Vocab
- Summary

For experts: if you're already familiar with the concepts and want to understand the changes in v5, skip ahead to v5 Separates Tokenizer Architecture from Trained Vocab.

Before diving into the changes, let's quickly cover what tokenization does at a high level.
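To ground the discussion, here is a minimal sketch of that round trip using the public `AutoTokenizer` API. The `gpt2` checkpoint is just an illustrative choice; any tokenizer checkpoint on the Hub works the same way:

```python
from transformers import AutoTokenizer

# Load the tokenizer that was trained alongside a model checkpoint.
# "gpt2" is only an example; substitute any Hub checkpoint.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Tokenization turns text into model-ready integers."

# Text -> token strings: the subword pieces chosen by the trained vocabulary.
tokens = tokenizer.tokenize(text)
print(tokens)

# Text -> token IDs: the integer inputs the model actually consumes.
ids = tokenizer(text)["input_ids"]
print(ids)

# Token IDs -> text: decoding closes the round trip.
print(tokenizer.decode(ids))
```

Everything the rest of this post covers, from the pipeline stages to the v5 class hierarchy, sits behind these few calls.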