[2602.13583] Differentiable Rule Induction from Raw Sequence Inputs

arXiv - Machine Learning · 3 min read

Summary

This paper presents a novel approach to differentiable rule induction from raw sequence inputs, enhancing interpretability in machine learning by integrating self-supervised clustering with inductive logic programming.

Why It Matters

The research addresses a critical challenge in machine learning: the ability to learn rules from raw data without explicit supervision. This advancement could significantly improve the robustness and scalability of rule-based models, making them more applicable in real-world scenarios where labeled data is scarce.

Key Takeaways

  • Introduces a method for rule induction from raw data using differentiable ILP.
  • Combines self-supervised clustering with inductive logic programming to avoid label leakage.
  • Demonstrates effectiveness in learning generalized rules from time series and image data.

Computer Science > Artificial Intelligence
arXiv:2602.13583 (cs) · Submitted on 14 Feb 2026

Title: Differentiable Rule Induction from Raw Sequence Inputs
Authors: Kun Gao, Katsumi Inoue, Yongzhi Cao, Hanpin Wang, Feng Yang

Abstract: Rule learning-based models are widely used in highly interpretable scenarios due to their transparent structures. Inductive logic programming (ILP), a form of machine learning, induces rules from facts while maintaining interpretability. Differentiable ILP models enhance this process by leveraging neural networks to improve robustness and scalability. However, most differentiable ILP methods rely on symbolic datasets and face challenges when learning directly from raw data. Specifically, they struggle with explicit label leakage: the inability to map continuous inputs to symbolic variables without explicit supervision of input feature labels. In this work, we address this issue by integrating a self-supervised differentiable clustering model with a novel differentiable ILP model, enabling rule learning from raw data without explicit label leakage. The learned rules effectively describe raw data through its features. We demonstrate that our method intuitively and precisely learns generalized rules from time series and image data.

Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
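The abstract describes two differentiable pieces: a self-supervised clustering model that maps continuous inputs to soft symbolic assignments, and an ILP model that evaluates rules over those assignments. The sketch below is a hypothetical illustration of that pipeline, not the authors' implementation: soft (softmax-over-distances) cluster membership stands in for the clustering model, and a product t-norm conjunction stands in for differentiable rule evaluation; all names, centroids, and the toy rule are illustrative assumptions.

```python
import numpy as np

def soft_cluster_assign(x, centroids, temperature=1.0):
    """Differentiable cluster assignment: softmax over negative squared
    distances to each centroid. Each row of the result sums to 1 and can
    be read as soft truth values of 'x belongs to symbol k'."""
    d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def rule_truth(body_probs):
    """Fuzzy conjunction of rule-body atoms via the product t-norm,
    which keeps the rule evaluation differentiable."""
    return np.prod(body_probs, axis=-1)

# Toy sequence of 1-D raw inputs and two learned "symbols" (low, high).
x = np.array([[0.1], [0.2], [0.9], [1.0]])
centroids = np.array([[0.0], [1.0]])
assign = soft_cluster_assign(x, centroids, temperature=0.1)

# Illustrative rule: pattern(t) :- high(t), high(t+1),
# evaluated softly over consecutive time steps using 'high' membership.
high = assign[:, 1]
conj = rule_truth(np.stack([high[:-1], high[1:]], axis=-1))
```

In this toy run the early low-valued points give the rule a near-zero truth value, while the final pair of high-valued points satisfies it; in the paper's setting, gradients flowing through both the assignment and the rule evaluation are what allow clusters and rules to be learned jointly without explicit feature labels.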
