[2602.21196] Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking
Summary
The paper presents UPipe, a novel technique for memory-efficient context parallelism in Transformer models, achieving significant reductions in activation memory usage while maintaining training speed.
Why It Matters
As Transformer models grow in complexity and application, efficient memory usage becomes critical for processing long sequences. UPipe addresses this challenge by enabling longer context lengths without sacrificing performance, which is essential for advancing NLP capabilities.
Key Takeaways
- UPipe reduces intermediate tensor memory usage in the attention layer by up to 87.5% for 32B Transformers.
- The technique supports context lengths of up to 5M tokens, surpassing previous methods by over 25%.
- Maintains training speed comparable to existing context parallelism techniques.
- Focuses on fine-grained chunking at the attention head level for efficiency.
- Addresses the limitations of current methods like Ring Attention and DeepSpeed Ulysses.
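The core idea behind headwise chunking can be illustrated with a minimal sketch (this is an illustrative NumPy toy, not the paper's distributed implementation): instead of materializing the (heads, seq, seq) attention-score tensor for all heads at once, compute attention one head at a time so that only a single (seq, seq) score matrix is live, cutting peak intermediate memory by roughly a factor of the head count.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_all_heads(q, k, v):
    # q, k, v: (heads, seq, dim). The score tensor (heads, seq, seq)
    # is materialized for every head simultaneously.
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = softmax(q @ k.transpose(0, 2, 1) * scale)
    return scores @ v

def attention_headwise(q, k, v):
    # Same math, but only one (seq, seq) score matrix exists at a time,
    # so peak intermediate memory drops by a factor of `heads`.
    out = np.empty_like(q)
    scale = 1.0 / np.sqrt(q.shape[-1])
    for h in range(q.shape[0]):
        scores = softmax(q[h] @ k[h].T * scale)
        out[h] = scores @ v[h]
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16, 4)) for _ in range(3))
assert np.allclose(attention_all_heads(q, k, v), attention_headwise(q, k, v))
```

The two routines produce identical outputs; the difference is purely in how much intermediate state is alive at once, which is the memory barrier UPipe targets.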
Computer Science > Machine Learning
arXiv:2602.21196 (cs) [Submitted on 24 Feb 2026]
Title: Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking
Authors: Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin
Abstract: Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5% for 32B Transformers, while matching previous context parallelism techniques in terms of training speed....