[2602.23349] FlashOptim: Optimizers for Memory Efficient Training
Summary
FlashOptim is a suite of optimizer modifications that cuts per-parameter training memory by more than half while preserving model quality and API compatibility.
Why It Matters
As machine learning models grow in size, the demand for memory-efficient training methods becomes critical. FlashOptim addresses this challenge by reducing memory requirements by over 50%, enabling researchers with limited resources to train large models effectively. This advancement could democratize access to cutting-edge AI technologies.
Key Takeaways
- FlashOptim reduces AdamW's per-parameter memory from 16 bytes to 7 bytes, or as low as 5 bytes with gradient release.
- The method maintains model quality across various benchmarks.
- It improves master weight splitting via a tight bound on its quantization error, and designs companding functions that reduce 8-bit optimizer state quantization error.
- Cuts model checkpoint sizes by more than half, improving storage efficiency.
- Applicable to popular optimizers like SGD, AdamW, and Lion.
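The byte counts above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes the conventional mixed-precision AdamW accounting (fp32 master weight, fp32 gradient, and two fp32 optimizer states); the per-parameter totals of 16, 7, and 5 bytes come from the abstract, while the breakdown itself is our assumption:

```python
# Back-of-the-envelope training-memory totals for the figures quoted above.
# Assumed baseline breakdown: fp32 master weight + fp32 gradient + two
# fp32 AdamW states (momentum and variance), 4 bytes each.

GIB = 1024 ** 3

def training_memory_gib(n_params: int, bytes_per_param: int) -> float:
    """Total parameter-proportional training memory in GiB."""
    return n_params * bytes_per_param / GIB

n = 7_000_000_000  # a 7B-parameter model, as in the abstract

baseline = 4 + 4 + 4 + 4  # weight, grad, momentum, variance = 16 bytes

print(training_memory_gib(n, baseline))  # ≈ 104.3 GiB — over 100 GB
print(training_memory_gib(n, 7))         # FlashOptim: ≈ 45.6 GiB
print(training_memory_gib(n, 5))         # with gradient release: ≈ 32.6 GiB
```

This matches the abstract's claim that a 7B model is impractical to train with under 100 GB of accelerator memory at 16 bytes per parameter.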
Computer Science > Machine Learning
arXiv:2602.23349 (cs) [Submitted on 26 Feb 2026]
Title: FlashOptim: Optimizers for Memory Efficient Training
Authors: Jose Javier Gonzalez Ortiz, Abhay Gupta, Chris Renard, Davis Blalock
Abstract: Standard mixed-precision training of neural networks requires many bytes of accelerator memory for each model parameter. These bytes reflect not just the parameter itself, but also its gradient and one or more optimizer state variables. With each of these values typically requiring 4 bytes, training even a 7 billion parameter model can be impractical for researchers with less than 100GB of accelerator memory. We introduce FlashOptim, a suite of optimizations that reduces per-parameter memory by over 50% while preserving model quality and API compatibility. Our approach introduces two key techniques. First, we improve master weight splitting by finding and exploiting a tight bound on its quantization error. Second, we design companding functions that greatly reduce the error in 8-bit optimizer state quantization. Together with 16-bit gradients, these techniques reduce AdamW memory from 16 bytes to 7 bytes per parameter, or 5 bytes with gradient release. They also cut model checkpoint sizes by more than half. Experiments with FlashOptim applied to SGD, AdamW, and Lion show no measurable q...
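Master weight splitting, mentioned in the abstract, stores a high-precision master weight as two lower-precision halves: the model weight plus a residual correcting its rounding error. The paper's contribution is a tight bound on this scheme's quantization error; the sketch below is only our illustration of the splitting idea using bf16 halves, not the paper's code:

```python
import numpy as np

def to_bf16(x):
    """Round an fp32 array to bfloat16 precision (values kept in fp32
    storage). bf16 keeps fp32's exponent but only 7 mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    lsb = (bits >> np.uint32(16)) & np.uint32(1)   # for round-to-nearest-even
    rounded = (bits + np.uint32(0x7FFF) + lsb) & np.uint32(0xFFFF0000)
    return rounded.view(np.float32)

def split_master(master):
    """Split an fp32 master weight into two bf16 values: the model
    weight (hi) and a residual (lo) capturing its rounding error."""
    hi = to_bf16(master)
    lo = to_bf16(master - hi)
    return hi, lo

w = np.float32([0.1234567])
hi, lo = split_master(w)
# hi + lo reconstructs w far more closely than hi alone
```

Stored this way, the two 16-bit halves together recover the fp32 master weight to within roughly bf16-squared precision, while only the `hi` half needs to participate in the forward pass.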
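The companding idea from the abstract follows a compress-quantize-expand pattern: apply a nonlinear function before uniform 8-bit quantization so that small magnitudes, which dominate optimizer states, get finer resolution. The paper designs its own companding functions; as a generic illustration only, here is the classic mu-law companding from audio coding:

```python
import numpy as np

MU = 255.0  # classic mu-law constant; NOT the paper's companding function

def quantize_8bit(x, max_abs):
    """Compress with mu-law, then uniformly quantize to 8 bits."""
    y = np.asarray(x, dtype=np.float64) / max_abs          # normalize to [-1, 1]
    c = np.sign(y) * np.log1p(MU * np.abs(y)) / np.log1p(MU)  # compress
    return np.round((c + 1.0) * 127.5).astype(np.uint8)    # map to [0, 255]

def dequantize_8bit(q, max_abs):
    """Invert the uniform quantization, then expand with inverse mu-law."""
    c = q.astype(np.float64) / 127.5 - 1.0                 # back to [-1, 1]
    y = np.sign(c) * np.expm1(np.abs(c) * np.log1p(MU)) / MU  # expand
    return y * max_abs

x = np.array([1e-3, 1e-2, 0.1, 0.9])
x_roundtrip = dequantize_8bit(quantize_8bit(x, 1.0), 1.0)
```

With plain uniform 8-bit quantization, any value below half a step (about 0.004 here) collapses toward a single level; companding keeps the relative round-trip error of small values bounded instead, which is what makes 8-bit optimizer states viable.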