[2510.19698] RLIE: Rule Generation with Logistic Regression, Iterative Refinement, and Evaluation for Large Language Models
Summary
The paper presents RLIE, a framework that integrates large language models (LLMs) with probabilistic rule learning: an LLM proposes natural-language rules, logistic regression weights and calibrates them, and the rule set is iteratively refined and evaluated.
Why It Matters
This research addresses a key limitation of LLM-based rule learning: most approaches treat rules in isolation and ignore interactions among them. By coupling LLM rule generation with logistic regression and iterative refinement, RLIE offers a more robust method for inductive reasoning, and it clarifies how LLMs' semantic strengths can be combined with calibrated probabilistic inference in neuro-symbolic systems.
Key Takeaways
- RLIE combines LLMs with probabilistic modeling for rule generation.
- The framework includes four stages: rule generation, logistic regression, iterative refinement, and evaluation.
- Applying the weighted rule set directly as a classifier outperforms strategies that inject the rules back into an LLM for inference.
- LLMs excel at semantic tasks but struggle with precise probabilistic integration.
- The study enhances understanding of neuro-symbolic reasoning in AI.
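The logistic-regression stage above (stage 2) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the rules, data, and feature encoding below are all hypothetical, assuming each LLM-proposed rule has been translated into an executable predicate whose firings become binary features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical executable stand-ins for natural-language rules proposed
# by an LLM; each rule is a predicate over one example.
rules = [
    lambda x: x["age"] > 30,
    lambda x: x["income"] > 50_000,
    lambda x: x["owns_home"],
]

def rule_features(examples):
    """Binary matrix: entry (i, j) is 1 iff rule j fires on example i."""
    return np.array([[int(r(x)) for r in rules] for x in examples])

# Toy labeled data (purely illustrative).
examples = [
    {"age": 45, "income": 80_000, "owns_home": True},
    {"age": 22, "income": 20_000, "owns_home": False},
    {"age": 35, "income": 60_000, "owns_home": False},
    {"age": 28, "income": 30_000, "owns_home": True},
]
labels = [1, 0, 1, 0]

# One learned weight per rule: global selection and calibration over
# the whole rule set, rather than scoring each rule independently.
X = rule_features(examples)
clf = LogisticRegression().fit(X, labels)
print(clf.coef_[0])
```

Because every rule gets a jointly learned weight, redundant or weakly predictive rules are down-weighted relative to the others, which is what "global selection and calibration" buys over per-rule scoring.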
Computer Science > Artificial Intelligence
arXiv:2510.19698 (cs)
[Submitted on 22 Oct 2025 (v1), last revised 13 Feb 2026 (this version, v2)]
Title: RLIE: Rule Generation with Logistic Regression, Iterative Refinement, and Evaluation for Large Language Models
Authors: Yang Yang, Hua XU, Zhangyi Hu, Yutao Yue
Abstract: Large Language Models (LLMs) can propose rules in natural language, sidestepping the need for a predefined predicate space in traditional rule learning. Yet many LLM-based approaches ignore interactions among rules, and the opportunity to couple LLMs with probabilistic rule learning for robust inference remains underexplored. We present RLIE, a unified framework that integrates LLMs with probabilistic modeling to learn a set of weighted rules. RLIE has four stages: (1) Rule generation, where an LLM proposes and filters candidates; (2) Logistic regression, which learns probabilistic weights for global selection and calibration; (3) Iterative refinement, which updates the rule set using prediction errors; and (4) Evaluation, which compares the weighted rule set as a direct classifier with methods that inject rules into an LLM. We evaluate multiple inference strategies on real-world datasets. Applying rules directly with their learned weights yields superior perfo...
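The iterative-refinement stage in the abstract (stage 3) can be sketched as a prune-and-propose loop. This is a hedged reconstruction under assumptions the paper does not spell out here: `propose_rules` is a hypothetical stand-in for the LLM call, and pruning by a small weight threshold is one plausible way to "update the rule set using prediction errors".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propose_rules(error_examples):
    """Placeholder for prompting an LLM with misclassified examples and
    parsing its reply into executable candidate rules (hypothetical)."""
    return [lambda x: x["income"] > 40_000]

def refine(rules, examples, labels, rounds=3, weight_floor=0.05):
    """Alternate between weighting rules and revising the rule set."""
    labels = np.asarray(labels)
    for _ in range(rounds):
        # Re-fit weights for the current rule set (stage 2 inside the loop).
        X = np.array([[int(r(x)) for r in rules] for x in examples])
        clf = LogisticRegression().fit(X, labels)
        preds = clf.predict(X)
        errors = [x for x, p, y in zip(examples, preds, labels) if p != y]
        if len(errors) == 0:
            break
        # Prune rules whose learned weight is negligible ...
        rules = [r for r, w in zip(rules, clf.coef_[0]) if abs(w) >= weight_floor]
        # ... and add fresh candidates targeted at the current errors.
        rules += propose_rules(errors)
    return rules

# Toy run: the initial rule fires on no example, so refinement replaces it.
examples = [{"income": 55_000}, {"income": 10_000}]
labels = [1, 0]
final_rules = refine([lambda x: x["income"] > 100_000], examples, labels)
```

The loop stops early once the weighted rule set classifies the training examples correctly, mirroring the paper's use of prediction errors as the refinement signal.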