[2509.25411] Boolean Satisfiability via Imitation Learning

Summary

The paper introduces ImitSAT, a novel branching policy for CDCL solvers in Boolean satisfiability, leveraging imitation learning to enhance decision-making efficiency.

Why It Matters

This research addresses the computational challenges of solving Boolean satisfiability problems, which are fundamental across computer science and artificial intelligence. By improving the efficiency of CDCL solvers, ImitSAT can enable faster problem solving in areas such as optimization and algorithm design.

Key Takeaways

  • ImitSAT uses imitation learning to improve CDCL branching policies.
  • The method reduces propagation counts, enhancing runtime efficiency.
  • KeyTrace sequences provide dense decision-level supervision.
  • ImitSAT integrates seamlessly into existing CDCL frameworks.
  • Extensive experiments show ImitSAT outperforms state-of-the-art methods.
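The replay idea behind these takeaways can be illustrated with a toy sketch, assuming a minimal DPLL-style loop with unit propagation. The names `key_trace`, `unit_propagate`, and the tiny CNF instance are illustrative only; real CDCL solvers, and the paper's actual integration, are far more involved.

```python
# Toy sketch: replaying a recorded sequence of surviving decisions
# (a "KeyTrace") as the branching policy of a minimal DPLL-style loop.
# This is NOT the paper's implementation, just an illustration of the idea.

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses; count propagations."""
    props = 0
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if abs(l) not in assignment]
            satisfied = any(assignment.get(abs(l)) == (l > 0) for l in clause)
            if satisfied:
                continue
            if not unassigned:
                return props, False          # conflict
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                props += 1
                changed = True
    return props, True

def solve(clauses, branch_policy):
    """Toy loop (no backtracking): branch_policy returns a decision literal or None."""
    assignment, total_props = {}, 0
    while True:
        props, ok = unit_propagate(clauses, assignment)
        total_props += props
        if not ok:
            return None, total_props
        lit = branch_policy(assignment)
        if lit is None:
            return assignment, total_props
        assignment[abs(lit)] = lit > 0

# A satisfiable toy instance; the "KeyTrace" here is a single recorded decision.
cnf = [[1, 2], [-1, 3], [-3, 4], [-2, -4]]
key_trace = iter([1])                        # decide x1 = True
model, props = solve(cnf, lambda a: next(key_trace, None))
```

On this instance a single replayed decision lets unit propagation fix every remaining variable, which mirrors the claim that replaying a KeyTrace is nearly conflict-free.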

Computer Science > Artificial Intelligence

arXiv:2509.25411 (cs) [Submitted on 29 Sep 2025 (v1), last revised 21 Feb 2026 (this version, v2)]

Title: Boolean Satisfiability via Imitation Learning
Authors: Zewei Zhang, Huan Liu, Yuanhao Yu, Jun Chen, Xiangyu Xu

Abstract: We propose ImitSAT, a branching policy for conflict-driven clause learning (CDCL) solvers based on imitation learning for the Boolean satisfiability problem (SAT). Unlike previous methods that predict instance-level signals to improve CDCL branching indirectly, or rely on reinforcement learning and insufficient CDCL information to enhance branching, ImitSAT learns from an expert KeyTrace that collapses a full run into the sequence of surviving decisions. Replaying a KeyTrace on the same instance is nearly conflict-free, providing dense decision-level supervision and directly reducing propagations -- the dominant contributor to wall-clock time. This prefix-conditioned supervision enables ImitSAT to reproduce high-quality branches without exploration, yielding faster convergence, stable training, and seamless integration into CDCL. Extensive experiments demonstrate that ImitSAT reduces propagation counts and runtime, outperforming state-of-the-art learned approaches. We released the source code and trained model at this https URL

Subjects: Artificial Intelligence (cs...
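The abstract's prefix-conditioned supervision can be sketched as a per-decision cross-entropy objective: each surviving decision in the KeyTrace is a training target conditioned on the prefix of earlier decisions. The scorer, `vocab`, and `trace` below are illustrative stand-ins, not the paper's model or data.

```python
# Hypothetical sketch of dense, prefix-conditioned imitation supervision:
# one cross-entropy term per expert decision, conditioned on the prefix.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def imitation_loss(key_trace, score_fn, vocab):
    """Sum over t of -log p(expert decision at t | decisions before t)."""
    loss = 0.0
    for t, expert in enumerate(key_trace):
        prefix = key_trace[:t]
        probs = softmax([score_fn(prefix, d) for d in vocab])
        loss += -math.log(probs[vocab.index(expert)])
    return loss

# Stand-in scorer (uniform scores) and a toy decision vocabulary/trace.
vocab = [1, -2, 3, -4]
trace = [1, 3, -4]
loss = imitation_loss(trace, lambda prefix, d: 0.0, vocab)
```

With the uniform stand-in scorer the loss reduces to `len(trace) * log(len(vocab))`; a learned policy would lower it by concentrating probability on the expert's branch at every prefix.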
