INT8 quantization gives me better accuracy than FP16! [D]
Hi everyone, I’m working on a deep learning model and I noticed something strange. When I compare different precisions:

- FP32 (baseline)
- FP16
- INT8 (post-training quantization)

I’m getting better inference accuracy with INT8 than with FP16, which I didn’t expect. I thought FP16 should be closer to FP32 and therefore more accurate than INT8, but in my case INT8 is actually performing better.

Has anyone seen this before? What could explain INT8 outperforming FP16 in inference?

Setup details: Model e...
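For reference, here is a minimal sketch of the kind of three-way comparison described above, assuming a PyTorch classification model and a labelled evaluation DataLoader (the post doesn't name the framework, model, or data, so `fp32_model` and `loader` are placeholders, and dynamic quantization of `nn.Linear` layers stands in for whichever PTQ scheme was actually used):

```python
import copy
import torch
import torch.nn as nn


def accuracy(model, loader, device="cpu", dtype=torch.float32):
    """Top-1 accuracy of `model` over `loader`, casting inputs to `dtype`."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device, dtype=dtype)
            y = y.to(device)
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total


def compare_precisions(fp32_model, loader):
    results = {}

    # FP32 baseline.
    results["fp32"] = accuracy(fp32_model, loader)

    # FP16: cast a copy of the model (and its inputs) to half precision.
    # FP16 inference is usually run on GPU; falls back to CPU here.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    fp16_model = copy.deepcopy(fp32_model).to(device=device, dtype=torch.float16)
    results["fp16"] = accuracy(fp16_model, loader, device=device, dtype=torch.float16)

    # INT8: post-training dynamic quantization of Linear layers (one of
    # several PTQ options; static PTQ with a calibration set is another).
    int8_model = torch.ao.quantization.quantize_dynamic(
        copy.deepcopy(fp32_model), {nn.Linear}, dtype=torch.qint8
    )
    results["int8"] = accuracy(int8_model, loader)

    return results
```

With a comparison like this, differences of a few tenths of a percent between precisions can easily fall within run-to-run noise, so it's worth checking whether the INT8-over-FP16 gap is actually significant on the full evaluation set.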