[2512.04189] BEP: A Binary Error Propagation Algorithm for Binary Neural Networks Training

arXiv - AI · 4 min read

Summary

The paper presents BEP, a novel Binary Error Propagation algorithm for training Binary Neural Networks (BNNs) that enables efficient backpropagation using only binary operations.

Why It Matters

BEP addresses the challenges of training BNNs, which are crucial for resource-constrained environments. By allowing end-to-end binary training, it enhances the performance of BNNs, making them more viable for practical applications in AI and machine learning.

Key Takeaways

  • BEP is the first algorithm to enable backpropagation in Binary Neural Networks using only binary operations.
  • The algorithm improves test accuracy by up to +10.57% in recurrent neural networks.
  • BEP facilitates end-to-end training for multi-layer architectures, enhancing the efficiency of BNNs.
  • The approach eliminates the need for floating-point arithmetic during training, preserving computational efficiency.
  • BEP is released as an open-source repository, promoting accessibility and further research.
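The takeaways above hinge on binary operations being far cheaper than floating-point arithmetic. A standard illustration (not code from the paper) is that a dot product of two {-1, +1} vectors, packed into machine words, reduces to XNOR plus popcount. A minimal sketch:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors packed as n-bit integers.

    Bit 1 encodes +1, bit 0 encodes -1. For each position,
    a_i * w_i = +1 exactly when the bits agree, i.e. when XNOR is 1.
    With k agreeing positions the dot product is k - (n - k) = 2k - n.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 where bits agree
    k = bin(xnor).count("1")                    # popcount
    return 2 * k - n

# a = [+1, -1, +1, +1] and w = [+1, +1, -1, +1] (MSB to LSB):
# they agree in 2 of 4 positions, so the dot product is 2*2 - 4 = 0.
print(binary_dot(0b1011, 0b1101, 4))  # -> 0
```

This is why a binary-only backward pass matters: if errors can also be propagated with such operations, the entire training loop stays in this cheap regime rather than falling back to floating point.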

Computer Science > Machine Learning

arXiv:2512.04189 (cs) · Submitted on 3 Dec 2025 (v1), last revised 17 Feb 2026 (this version, v2)

Title: BEP: A Binary Error Propagation Algorithm for Binary Neural Networks Training

Authors: Luca Colombo, Fabrizio Pittorino, Daniele Zambon, Carlo Baldassi, Manuel Roveri, Cesare Alippi

Abstract: Binary Neural Networks (BNNs), which constrain both weights and activations to binary values, offer substantial reductions in computational complexity, memory footprint, and energy consumption. These advantages make them particularly well suited for deployment on resource-constrained devices. However, training BNNs via gradient-based optimization remains challenging due to the discrete nature of their variables. The dominant approach, quantization-aware training, circumvents this issue by employing surrogate gradients. Yet, this method requires maintaining latent full-precision parameters and performing the backward pass with floating-point arithmetic, thereby forfeiting the efficiency of binary operations during training. While alternative approaches based on local learning rules exist, they are unsuitable for global credit assignment and for back-propagating errors in multi-layer architectures. This paper introduces Binary Error Propagation (BEP), the first learning algorithm...
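The abstract contrasts BEP with quantization-aware training, where surrogate gradients stand in for the unusable derivative of the sign function. A common surrogate is the straight-through estimator (STE); the NumPy sketch below is a generic illustration of that baseline approach, not the paper's algorithm, and the specific weights and clipping threshold are illustrative assumptions:

```python
import numpy as np

def binarize(x):
    """Forward pass: sign(x) in {-1, +1} (treating 0 as +1)."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_grad(x, upstream, clip=1.0):
    """Backward pass: d sign(x)/dx is zero almost everywhere, so the
    straight-through estimator passes the upstream gradient unchanged
    wherever |x| <= clip and blocks it elsewhere."""
    return upstream * (np.abs(x) <= clip)

# One update step on latent full-precision weights (illustrative values).
w = np.array([0.3, -1.7, 0.05])     # latent float parameters
g = np.array([1.0, 1.0, 1.0])       # upstream gradient
lr = 0.1
w -= lr * ste_grad(w, g)            # only in-range weights move
w_bin = binarize(w)                 # binary weights used at inference
```

Note that both the latent weights `w` and the update itself are floating point; this is exactly the cost the abstract says BEP avoids by keeping the backward pass binary.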


