[2603.19338] DAPA: Distribution Aware Piecewise Activation Functions for On-Device Transformer Inference and Training


arXiv - Machine Learning


Computer Science > Machine Learning

arXiv:2603.19338 (cs) [Submitted on 19 Mar 2026]

Title: DAPA: Distribution Aware Piecewise Activation Functions for On-Device Transformer Inference and Training

Authors: Maoyang Xiang, Bo Wang

Abstract: Non-linear activation functions play a pivotal role in on-device inference and training: they consume substantial hardware resources and have a significant impact on system performance and energy efficiency. In this work, we propose Distribution-Aware Piecewise Activation (DAPA), a differentiable and hardware-friendly activation function for Transformer architectures that exploits the distribution of pre-activation data. DAPA employs a non-uniform piecewise approximation that allocates finer segments to high-probability regions of the distribution, improving generalizability over prior piecewise-linear methods. The resulting approximation is further quantized using a Distribution-Weighted Mean Square Error criterion to reduce latency and resource utilization for hardware deployment. Our HLS implementation demonstrates that DAPA speeds up GELU computation by 16× and decreases DSP utilization by 16× while maintaining comparable or better performance across vision Transformers and GPT-2 models.

Subjects: Machine Learning (cs...
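The two ideas in the abstract — a piecewise-linear approximation with finer segments where pre-activations concentrate, and an error metric weighted by the pre-activation distribution — can be sketched as follows. This is a minimal illustration, not the paper's method: the knot placement, the standard-normal weighting, and the names `gelu_pwl` and `distribution_weighted_mse` are all assumptions for the sketch; the paper derives its segments and quantization from measured pre-activation distributions.

```python
import math

def gelu(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Non-uniform breakpoints: finer near zero, where roughly Gaussian
# pre-activations concentrate; coarser in the tails. These particular
# knots are illustrative, not taken from the paper.
KNOTS = [-6.0, -3.0, -2.0, -1.5, -1.0, -0.5, -0.25, 0.0,
         0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 6.0]
VALUES = [gelu(k) for k in KNOTS]

def gelu_pwl(x):
    # Piecewise-linear GELU: clamp to the table range, then linearly
    # interpolate between the two surrounding knots.
    if x <= KNOTS[0]:
        return VALUES[0]
    if x >= KNOTS[-1]:
        return VALUES[-1]
    for i in range(len(KNOTS) - 1):
        if x <= KNOTS[i + 1]:
            t = (x - KNOTS[i]) / (KNOTS[i + 1] - KNOTS[i])
            return VALUES[i] + t * (VALUES[i + 1] - VALUES[i])

def distribution_weighted_mse(n=2001, lo=-6.0, hi=6.0):
    # Weight squared errors by a standard-normal density so that
    # accuracy in high-probability regions dominates the metric,
    # in the spirit of the paper's distribution-weighted MSE.
    step = (hi - lo) / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = lo + i * step
        w = math.exp(-0.5 * x * x)
        num += w * (gelu_pwl(x) - gelu(x)) ** 2
        den += w
    return num / den
```

On hardware, the interpolation loop would be replaced by a small comparator tree and one multiply-add per evaluation, which is where the reported speedup and DSP savings over direct GELU evaluation would come from.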

Originally published on March 23, 2026. Curated by AI News.

