[2603.28405] EdgeDiT: Hardware-Aware Diffusion Transformers for Efficient On-Device Image Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.28405 (cs)
[Submitted on 30 Mar 2026]

Title: EdgeDiT: Hardware-Aware Diffusion Transformers for Efficient On-Device Image Generation
Authors: Sravanth Kodavanti, Manjunath Arveti, Sowmya Vajrala, Srinivas Miriyala, Vikram N R

Abstract: Diffusion Transformers (DiT) have established a new state-of-the-art in high-fidelity image synthesis; however, their massive computational complexity and memory requirements hinder local deployment on resource-constrained edge devices. In this paper, we introduce EdgeDiT, a family of hardware-efficient generative transformers specifically engineered for mobile Neural Processing Units (NPUs), such as the Qualcomm Hexagon and Apple Neural Engine (ANE). By leveraging a hardware-aware optimization framework, we systematically identify and prune structural redundancies within the DiT backbone that are particularly taxing for mobile data-flows. Our approach yields a series of lightweight models that achieve a 20-30% reduction in parameters, a 36-46% decrease in FLOPs, and a 1.65-fold reduction in on-device latency without sacrificing the scaling advantages or the expressive capacity of the original transformer architecture. Extensive benchmarking demonstrates that EdgeDiT offers a superior Par...
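The abstract describes pruning structural redundancies in the DiT backbone under a FLOP budget (a 36-46% decrease in FLOPs). The paper's actual pruning criterion is not given in the abstract, but the general idea of budget-constrained structural pruning can be sketched as follows: each transformer block gets a per-forward-pass FLOP cost and an importance proxy (e.g., quality drop when the block is ablated), and the least-important blocks are dropped until the cumulative savings meet the target. All names (`Block`, `prune_to_budget`, the greedy rule itself) are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    flops: float       # per-forward-pass cost of this block
    importance: float  # proxy score, e.g. quality drop when the block is ablated

def prune_to_budget(blocks, target_flop_reduction):
    """Greedily drop the least-important blocks until cumulative FLOP
    savings reach the target fraction of the model's total FLOPs.
    (Hypothetical selection rule; the abstract does not specify one.)"""
    total = sum(b.flops for b in blocks)
    pruned, saved = [], 0.0
    for b in sorted(blocks, key=lambda b: b.importance):
        if saved / total >= target_flop_reduction:
            break
        pruned.append(b.name)
        saved += b.flops
    return pruned, saved / total

# Toy backbone: 10 equal-cost blocks with increasing importance.
backbone = [Block(f"blk{i}", flops=10.0, importance=float(i)) for i in range(10)]
dropped, ratio = prune_to_budget(backbone, target_flop_reduction=0.4)
# → drops the four least-important blocks for a 40% FLOP reduction
```

In practice, importance scores would come from ablation studies or sensitivity analysis on a validation set, and the budget would be chosen from on-device latency profiles rather than FLOPs alone, since NPU data-flow constraints (which the abstract emphasizes) make FLOPs an imperfect latency proxy.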