[2602.18095] Neurosymbolic Language Reasoning as Satisfiability Modulo Theory
Summary
This article presents Logitext, a neurosymbolic language that strengthens natural language understanding by integrating large language models (LLMs) with satisfiability modulo theory (SMT) solving for improved logical reasoning.
Why It Matters
As natural language processing (NLP) evolves, the ability to perform logical reasoning alongside textual comprehension becomes crucial. This research addresses limitations in current models by introducing a framework that allows for better handling of partially structured logical tasks, thereby broadening the applicability of neurosymbolic methods in AI.
Key Takeaways
- Logitext enables the representation of documents as natural language text constraints (NLTCs).
- The integration of LLMs with SMT solving improves both accuracy and coverage in reasoning tasks.
- This research extends neurosymbolic methods beyond fully formalizable domains, addressing more complex natural language scenarios.
Computer Science > Artificial Intelligence
arXiv:2602.18095 (cs) [Submitted on 20 Feb 2026]
Title: Neurosymbolic Language Reasoning as Satisfiability Modulo Theory
Authors: Hyunseok Oh, Sam Stern, Youngki Lee, Matthai Philipose
Abstract: Natural language understanding requires interleaving textual and logical reasoning, yet large language models often fail to perform such reasoning reliably. Existing neurosymbolic systems combine LLMs with solvers but remain limited to fully formalizable tasks such as math or program synthesis, leaving natural documents with only partial logical structure unaddressed. We introduce Logitext, a neurosymbolic language that represents documents as natural language text constraints (NLTCs), making partial logical structure explicit. We develop an algorithm that integrates LLM-based constraint evaluation with satisfiability modulo theory (SMT) solving, enabling joint textual-logical reasoning. Experiments on a new content moderation benchmark, together with LegalBench and Super-Natural Instructions, show that Logitext improves both accuracy and coverage. This work is the first that treats LLM-based reasoning as an SMT theory, extending neurosymbolic methods beyond fully formalizable domains.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.18095 [cs.AI]
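To make the "LLM-based reasoning as an SMT theory" idea concrete, here is a minimal sketch, not the authors' implementation: natural-language atoms (NLTCs) are decided by a stubbed LLM oracle, while the remaining Boolean variables are searched symbolically. The `llm_eval` function, the example document, and the moderation policy formula are all illustrative assumptions; a real system would query an actual model and a real SMT solver.

```python
from itertools import product

# Hypothetical stub standing in for an LLM call that decides whether a
# natural-language claim holds for a document. A real system would query
# a model; this toy heuristic just checks for a substring.
def llm_eval(claim: str, document: str) -> bool:
    return claim.lower() in document.lower()

# A document with only partial logical structure.
document = "This post contains profanity and a threat of violence."

# NLTC atoms: their truth values come from the LLM "theory solver".
nltc_atoms = {
    "profanity": "profanity",
    "threat": "threat of violence",
}

# Illustrative policy over NLTC atoms plus free symbolic Booleans:
# remove the post iff it has profanity and (a threat or no appeal).
def policy(assignment: dict) -> bool:
    p, t, appealed, remove = (assignment[k] for k in
                              ("profanity", "threat", "appealed", "remove"))
    return remove == (p and (t or not appealed))

# SMT-style split: NLTC atoms are fixed by the oracle; the remaining
# Boolean variables are enumerated by the symbolic search.
fixed = {name: llm_eval(claim, document) for name, claim in nltc_atoms.items()}
free_vars = ["appealed", "remove"]
models = [dict(fixed, **dict(zip(free_vars, vals)))
          for vals in product([False, True], repeat=len(free_vars))
          if policy(dict(fixed, **dict(zip(free_vars, vals))))]
# Every satisfying model here sets remove=True, since the oracle finds
# both profanity and a threat in the document.
```

The brute-force enumeration plays the role of the SMT solver only for readability; the key structural point from the paper is the division of labor, with the solver handling the symbolic fragment and the LLM acting as the decision procedure for the natural-language theory.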