[2505.13963] Through a Compressed Lens: Investigating The Impact of Quantization on Factual Knowledge Recall
Computer Science > Computation and Language

arXiv:2505.13963 (cs)

[Submitted on 20 May 2025 (v1), last revised 29 Apr 2026 (this version, v3)]

Title: Through a Compressed Lens: Investigating The Impact of Quantization on Factual Knowledge Recall

Authors: Qianli Wang, Mingyang Wang, Nils Feldhus, Simon Ostermann, Yuan Cao, Hinrich Schütze, Sebastian Möller, Vera Schmitt

Abstract: Quantization methods are widely used to accelerate inference and streamline the deployment of large language models (LLMs). Although quantization's effects on various LLM capabilities have been extensively studied, one critical area remains underexplored: factual knowledge recall (FKR), the process by which LLMs access stored knowledge. To this end, we conduct comprehensive experiments using three common quantization techniques at distinct bit widths, in conjunction with interpretability-driven analyses on two tasks, knowledge memorization and latent multi-hop reasoning. We show that quantization typically results in information loss within LLMs, consequently diminishing their capacity for FKR. This effect is particularly amplified in smaller models within the same architectural families. However, models quantized at reduced bit precision do not consistently exhibit inferior performance and occasionally quantizat...
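The setup the abstract describes, quantizing an LLM at a given bit width and then probing its factual knowledge recall, can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the excerpt above names neither the three quantization techniques nor the benchmarks, so the 4-bit NF4 configuration (via Hugging Face transformers and bitsandbytes), the model name, and the cloze-style probe prompt below are all illustrative assumptions.

```python
# Minimal sketch: load a 4-bit quantized causal LM and probe a single fact.
# Assumes transformers + bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative choice, not from the paper

# One possible quantization config; the paper compares several techniques
# and bit widths, none of which are specified in the excerpt above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Toy cloze probe for knowledge memorization; comparing this completion
# against the full-precision model's output gives a crude FKR check.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Running the same probe set on the full-precision and quantized variants of a model, and counting how many facts survive quantization, is one simple way to quantify the information loss the abstract reports.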