[P] bitnet-edge: Ternary-weight CNNs ({-1,0,+1}) on MNIST and CIFAR-10, deployed to ESP32-S3 with zero multiplications
I built a pipeline that takes ternary-quantized CNNs from PyTorch training all the way to bare-metal inference on an ESP32-S3 microcontroller. There is no ML framework at inference time, just C with add/subtract/skip. It's tested on both MNIST and CIFAR-10, with a fully dynamic inference engine that handles arbitrary input shapes without recompilation.

The pipeline goes like this:

- Training is in PyTorch with CUDA, using FP32 latent weights with a straight-through estimator.
- AbsMax quantization, where w_q = clamp(ro...
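As a rough sketch of what AbsMax ternarization does (the post's formula is cut off, so the exact scale choice here, the mean of the absolute weights, is an assumption, as are the function name and the plain-Python form; the real training code would operate on PyTorch tensors):

```python
def absmax_ternary(weights):
    """AbsMax ternary quantization sketch (assumed form):
    scale = mean(|w|); w_q = clamp(round(w / scale), -1, +1).
    Returns the ternary weights plus the scale needed to
    rescale activations after the add/subtract/skip kernel."""
    scale = sum(abs(w) for w in weights) / len(weights) or 1e-8
    def q(w):
        return max(-1, min(1, round(w / scale)))
    return [q(w) for w in weights], scale

# Every weight collapses to -1, 0, or +1:
w_q, scale = absmax_ternary([0.8, -0.05, -1.2, 0.3])
# w_q == [1, 0, -1, 1]
```

During training, the straight-through estimator uses these ternary values in the forward pass while letting gradients flow to the FP32 latent weights unchanged.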
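The "zero multiplications" claim follows directly from the ternary weights: a dot product against {-1, 0, +1} reduces to adds, subtracts, and skips. A Python model of what the C inner loop would look like (the function name is mine; the actual firmware is C, not Python):

```python
def ternary_dot(xs, ws):
    """Multiplication-free dot product with ternary weights:
    w == +1 -> add the input, w == -1 -> subtract it,
    w == 0 -> skip the element entirely."""
    acc = 0
    for x, w in zip(xs, ws):
        if w == 1:
            acc += x
        elif w == -1:
            acc -= x
        # w == 0: no work at all
    return acc

# 3*1 + 5*0 + 7*(-1) without any multiply:
ternary_dot([3, 5, 7], [1, 0, -1])  # -> -4
```

The zero branch is also why sparsity in the ternary weights translates directly into fewer instructions on the microcontroller.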