[2603.23575] APreQEL: Adaptive Mixed Precision Quantization For Edge LLMs
Computer Science > Machine Learning
arXiv:2603.23575 (cs)
[Submitted on 24 Mar 2026]

Title: APreQEL: Adaptive Mixed Precision Quantization For Edge LLMs
Authors: Meriem Bouzouad, Yuan-Hao Chang, Jalil Boukhobza

Abstract: Today, large language models have demonstrated their strengths in tasks ranging from reasoning and code generation to complex problem solving. However, this advancement comes with high computational and memory requirements, making it challenging to deploy these models on edge devices while ensuring real-time responses and data privacy. Quantization is a common approach to reducing memory use, but most methods apply it uniformly across all layers, which ignores the fact that different layers may respond differently to reduced precision. Importantly, memory consumption and computational throughput are not necessarily aligned, further complicating deployment decisions. This paper proposes an adaptive mixed precision quantization mechanism that balances memory, latency, and accuracy in edge deployment under user-defined priorities. This is achieved by analyzing the layer-wise contribution and by inferring how different quantization types behave on the target hardware platform in order to assign the most suitable quantization type to each layer. This integration ensures that laye...
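The per-layer assignment described in the abstract can be sketched as a simple greedy policy: each layer gets the quantization type that minimizes a weighted cost over memory, latency, and an accuracy proxy, where the weights encode the user-defined priorities. This is a minimal illustration, not the paper's actual algorithm; the quantization types, their hardware cost factors, and the sensitivity scores are all hypothetical placeholders (in practice they would come from profiling the target device and from the layer-wise contribution analysis).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QType:
    name: str
    bits: int
    mem_factor: float   # relative memory footprint vs. FP16 (assumed)
    lat_factor: float   # relative latency vs. FP16, hardware-dependent (assumed)
    err_factor: float   # relative quantization-error proxy (assumed)

# Hypothetical quantization types; real factors would be profiled per device.
QTYPES = [
    QType("fp16", 16, 1.00, 1.00, 0.0),
    QType("int8",  8, 0.50, 0.70, 1.0),
    QType("int4",  4, 0.25, 0.55, 3.0),
]

def assign_qtypes(layer_sensitivity, w_mem=1.0, w_lat=1.0, w_acc=1.0):
    """Greedy per-layer assignment.

    layer_sensitivity: dict mapping layer name -> sensitivity score
    (how much the layer's output degrades under reduced precision).
    The weights w_mem, w_lat, w_acc encode user priorities; each layer
    is given the type minimizing the weighted cost, so sensitive layers
    keep higher precision while robust ones are quantized aggressively.
    """
    plan = {}
    for layer, sens in layer_sensitivity.items():
        best = min(
            QTYPES,
            key=lambda q: w_mem * q.mem_factor
                        + w_lat * q.lat_factor
                        + w_acc * sens * q.err_factor,
        )
        plan[layer] = best.name
    return plan

# Usage: a highly sensitive layer stays in FP16, a robust one drops to INT4.
plan = assign_qtypes({"attn.0": 5.0, "mlp.0": 0.1})
```

With balanced weights, the sensitive `attn.0` layer ends up in FP16 (its error term dominates) while `mlp.0` is pushed to INT4; raising `w_acc` or lowering `w_mem` shifts the whole plan toward higher precision, which is the kind of priority-driven trade-off the abstract describes.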