How to train a new language model from scratch using Transformers and Tokenizers
Published February 14, 2020 · Julien Chaumond (julien-c)

Over the past few months, we made several improvements to our transformers and tokenizers libraries, with the goal of making it easier than ever to train a new language model from scratch.

In this post we’ll demo how to train a “small” model (84M parameters = 6 layers, 768 hidden size, 12 attention heads) – that’s the same number of layers & heads as DistilBERT – on Esperanto. We’ll then fine-tune the model on the downstream task of part-of-speech tagging. (A configuration sketch for a model of this size follows at the end of this section.)

Esperanto is a constructed language with a goal of being easy to learn. We pick it for this demo for several reasons:

- It is a relatively low-resource language (even though it’s spoken by ~2 million people), so this demo is less boring than training one more English model 😁
- Its grammar is highly regular (e.g. all common nouns end in -o, all adjectives in -a), so we should get interesting linguistic results even on a small dataset.
- Finally, the overarching goal at the foundation of the language is to bring people closer (fostering world peace and international understanding), which one could argue is aligned with the goal of the NLP community 💚

N.B. You won’t need to understand Esperanto to understand this post, but if you do want to learn it, Duolingo has a nice course with 280k active learners.

Our model is going to be called… wait for it…
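To make that “small” architecture concrete, here is a minimal sketch of what such a configuration could look like using transformers’ RobertaConfig. The 52,000-token vocabulary size is an assumption chosen purely for illustration; the point is only to show how the 6-layer / 768-hidden / 12-head shape translates into roughly 84M parameters.

```python
# Minimal sketch (illustrative only, not the exact recipe from this post) of a
# RoBERTa-style config matching the "small" model described above:
# 6 layers, hidden size 768, 12 attention heads (~84M parameters).
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=52_000,            # assumed tokenizer vocabulary size, for illustration
    max_position_embeddings=514,  # RoBERTa-style position embeddings
    hidden_size=768,
    num_hidden_layers=6,
    num_attention_heads=12,
    type_vocab_size=1,
)

model = RobertaForMaskedLM(config=config)
print(model.num_parameters())     # roughly 84 million with a ~52k vocabulary
```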