Make LLM Fine-tuning 2x faster with Unsloth and 🤗 TRL
Published January 10, 2024 · Daniel (Unsloth), danielhanchen

Pulling your hair out because LLM fine-tuning is taking forever? In this post, we introduce a lightweight tool developed by the community to make LLM fine-tuning go super fast! Before diving into Unsloth, it may be helpful to read our QLoRA blog post, or to be familiar with LLM fine-tuning using the 🤗 PEFT library.

Unsloth - 2x faster, -40% memory usage, 0% accuracy degradation

Unsloth is a lightweight library for faster LLM fine-tuning that is fully compatible with the Hugging Face ecosystem (Hub, transformers, PEFT, TRL). The library is actively developed by the Unsloth team (Daniel and Michael) and the open source community. It supports most NVIDIA GPUs, from the GTX 1070 all the way up to H100s, and can be used with the entire trainer suite from the TRL library (SFTTrainer, DPOTrainer, PPOTrainer). At the time of writing, Unsloth supports the Llama (CodeLlama, Yi, etc.) and Mistral architectures.

Unsloth works by overwriting parts of the modeling code with optimized operations. By manually deriving backpropagation steps and rewriting all PyTorch modules into Triton kernels, Unsloth both reduces memory usage and makes fine-tuning faster. Crucially, accuracy degradation is 0% with respect to normal QLoRA, because no approximations are made in the optimized code.

Benchmarking

1 A100 40GB Data...
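To give a feel for what "manually deriving backpropagation steps" means, here is a minimal, illustrative sketch (not Unsloth code): we derive the gradient of a scalar linear-plus-squared-error step by hand, then cross-check it against a numerical finite-difference gradient. Unsloth applies this idea at scale, fusing the hand-derived gradients into Triton kernels instead of relying on autograd's generic graph.

```python
def forward(w, b, x, t):
    """y = w*x + b, loss = (y - t)^2."""
    y = w * x + b
    return (y - t) ** 2

def manual_grads(w, b, x, t):
    """Hand-derived gradients: dL/dw = 2*(y - t)*x, dL/db = 2*(y - t)."""
    y = w * x + b
    return 2 * (y - t) * x, 2 * (y - t)

def numeric_grads(w, b, x, t, eps=1e-6):
    """Central finite differences, used only to verify the derivation."""
    dw = (forward(w + eps, b, x, t) - forward(w - eps, b, x, t)) / (2 * eps)
    db = (forward(w, b + eps, x, t) - forward(w, b - eps, x, t)) / (2 * eps)
    return dw, db

w, b, x, t = 0.5, -1.0, 2.0, 3.0
gw, gb = manual_grads(w, b, x, t)
nw, nb = numeric_grads(w, b, x, t)
print(abs(gw - nw) < 1e-4 and abs(gb - nb) < 1e-4)  # True
```

Because the derived expression matches the numerical gradient exactly (up to floating-point tolerance), no approximation is introduced; this is the same reason Unsloth's optimized kernels incur 0% accuracy degradation relative to standard QLoRA.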