[2602.17493] Learning with Boolean threshold functions

arXiv - AI · 4 min read

Summary

This article presents a method for training neural networks on Boolean data using Boolean threshold functions (BTFs). The approach replaces loss minimization with constraint satisfaction and achieves strong generalization on several tasks where standard gradient-based methods struggle.

Why It Matters

The research introduces a new approach to neural network training that emphasizes constraint satisfaction over traditional loss minimization, potentially leading to more interpretable and efficient models in discrete systems. This is particularly relevant for applications requiring logical reasoning and sparse representations.

Key Takeaways

  • Proposes a method using Boolean threshold functions for neural networks.
  • Replaces loss minimization with a nonconvex constraint formulation.
  • Achieves strong generalization in tasks where gradient methods struggle.
  • Utilizes the reflect-reflect-relax (RRR) projection algorithm for training.
  • Demonstrates implications for interpretability and efficient inference.
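The Boolean threshold function that each node implements can be sketched in a few lines: the node outputs +1 when the weighted sum of its ±1 inputs meets a margin bound, and -1 otherwise. This is an illustrative sketch based on the abstract, not the authors' code; the function name `btf`, the `margin` parameter, and the majority-gate example are our assumptions.

```python
import numpy as np

def btf(x, w, margin=0):
    """Boolean threshold function: +1 if the weighted sum of the
    +/-1 inputs x under weights w meets the margin bound, else -1.
    (Illustrative sketch; names are ours, not the paper's.)"""
    s = np.dot(w, x)
    return 1 if s >= margin else -1

# Example: a 3-input majority gate realized as a BTF with +/-1 weights.
x = np.array([1, -1, 1])
w = np.array([1, 1, 1])
print(btf(x, w))  # +1: two of the three inputs are +1
```

A larger margin bound rules out borderline sums, which is what the abstract ties to provably sparse, logic-gate-equivalent representations.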

Computer Science > Machine Learning
arXiv:2602.17493 (cs)
[Submitted on 19 Feb 2026]

Title: Learning with Boolean threshold functions
Authors: Veit Elser, Manish Krishan Lal

Abstract: We develop a method for training neural networks on Boolean data in which the values at all nodes are strictly $\pm 1$, and the resulting models are typically equivalent to networks whose nonzero weights are also $\pm 1$. The method replaces loss minimization with a nonconvex constraint formulation. Each node implements a Boolean threshold function (BTF), and training is expressed through a divide-and-concur decomposition into two complementary constraints: one enforces local BTF consistency between inputs, weights, and output; the other imposes architectural concurrence, equating neuron outputs with downstream inputs and enforcing weight equality across training-data instantiations of the network. The reflect-reflect-relax (RRR) projection algorithm is used to reconcile these constraints. Each BTF constraint includes a lower bound on the margin. When this bound is sufficiently large, the learned representations are provably sparse and equivalent to networks composed of simple logical gates with $\pm 1$ weights. Across a range of tasks -- including multiplier-circuit discovery, binary autoencoding, logic-network inference, and cellular automata learning -- t...
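The RRR iteration named in the abstract can be written generically for two constraint sets given their projection operators. Below is a sketch of the standard reflect-reflect-relax (difference-map-style) update on a toy problem, not the paper's implementation; the relaxation parameter `beta = 0.5` and the line/circle constraints are our illustrative choices.

```python
import numpy as np

def rrr_step(x, P_A, P_B, beta=0.5):
    """One reflect-reflect-relax update for constraint sets with
    projections P_A and P_B (generic sketch, not the paper's code):
        x <- x + beta * (P_B(2*P_A(x) - x) - P_A(x))
    At a fixed point x*, P_A(x*) lies in both constraint sets."""
    a = P_A(x)
    b = P_B(2 * a - x)
    return x + beta * (b - a)

# Toy demo: find a point on both the line y = x and the unit circle.
P_line = lambda v: np.full(2, v.mean())        # project onto the line y = x
P_circle = lambda v: v / np.linalg.norm(v)     # project onto the unit circle

x = np.array([2.0, 0.5])
for _ in range(200):
    x = rrr_step(x, P_line, P_circle)
sol = P_line(x)
print(sol)  # converges near (1/sqrt(2), 1/sqrt(2))
```

The circle constraint here is nonconvex, like the BTF constraints in the paper, which is exactly the regime projection methods such as RRR are designed to handle.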
