[2603.02271] Characterizing VLA Models: Identifying the Action Generation Bottleneck for Edge AI Architectures
Computer Science > Performance

arXiv:2603.02271 (cs) [Submitted on 1 Mar 2026]

Title: Characterizing VLA Models: Identifying the Action Generation Bottleneck for Edge AI Architectures
Authors: Manoj Vishwanathan, Suvinay Subramanian, Anand Raghunathan

Abstract: Vision-Language-Action (VLA) models are an emerging class of workloads critical to robotics and embodied AI at the edge. As these models scale, they show significant capability gains, yet they must be deployed locally to meet the strict latency requirements of real-time applications. This paper characterizes VLA performance on two generations of edge hardware, namely the NVIDIA Jetson Orin and Thor platforms. Using MolmoAct-7B, a state-of-the-art VLA model, we identify a primary execution bottleneck: up to 75% of end-to-end latency is consumed by the memory-bound action-generation phase. Through analytical modeling and simulation, we project the hardware requirements for scaling to 100B-parameter models. We also explore high-bandwidth memory technologies and processing-in-memory (PIM) as promising future pathways for embodied-AI edge systems.

Subjects: Performance (cs.PF); Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR); Robotics (cs.RO)
Cite as: arXiv:2603.02271 [cs.PF]
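The "memory-bound" characterization of action generation follows from autoregressive decoding: producing each action token requires streaming essentially all model weights from memory, so per-token latency is lower-bounded by weight bytes divided by memory bandwidth. A minimal back-of-the-envelope sketch of that bound (the parameter count, precision, and bandwidth figures below are illustrative assumptions, not numbers taken from the paper):

```python
def decode_token_latency_ms(n_params: float,
                            bytes_per_param: float,
                            bandwidth_gb_s: float) -> float:
    """Memory-bandwidth lower bound on per-token decode latency.

    Assumes every weight is read once per generated token, which is the
    regime where decoding is bandwidth- rather than compute-bound.
    """
    weight_bytes = n_params * bytes_per_param
    seconds = weight_bytes / (bandwidth_gb_s * 1e9)
    return seconds * 1e3


# Illustrative: a 7B-parameter model in FP16 (2 bytes/param) on a
# hypothetical ~200 GB/s edge memory system.
latency_ms = decode_token_latency_ms(7e9, 2, 200.0)
print(f"{latency_ms:.0f} ms per token")  # -> 70 ms per token
```

Under these assumptions a 100B-parameter model on the same memory system would take roughly 14x longer per token, which is why the abstract points to higher-bandwidth memory and PIM as scaling pathways.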