[2509.03345] Do Language Models Follow Occam's Razor? An Evaluation of Parsimony in Inductive and Abductive Reasoning
Computer Science > Artificial Intelligence

arXiv:2509.03345 (cs)

[Submitted on 3 Sep 2025 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: Do Language Models Follow Occam's Razor? An Evaluation of Parsimony in Inductive and Abductive Reasoning

Authors: Yunxin Sun, Abulhair Saparov

Abstract: Non-deductive reasoning, encompassing inductive and abductive reasoning, is essential for addressing complex real-world questions. One key feature of inductive and abductive reasoning is that many valid hypotheses exist; the simplest ones (those that adhere to Occam's Razor) are often the most useful. However, this aspect is ignored in recent work that evaluates the non-deductive reasoning capabilities of large language models (LLMs). This work fills that gap, focusing on whether the inductive and abductive reasoning of LLMs adheres to Occam's Razor, while also examining the correctness of their reasoning. To accomplish this goal, we introduce a framework for synthetically generating reasoning questions that (a) require inductive and abductive reasoning simultaneously, and (b) can be readily extended to produce any abductive/inductive reasoning question expressible in first-order logic. The task for the intelligent agent is to produce hypotheses to explain obse...
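The parsimony criterion the abstract describes — among the many hypotheses consistent with the observations, prefer the simplest — can be sketched in a few lines. This is an illustrative toy only, not the paper's framework: the observations, candidate hypotheses, and complexity scoring (counting conjuncts) are all hypothetical.

```python
# Occam's-razor-style hypothesis selection (illustrative sketch, not the
# paper's method). Each candidate hypothesis is a name, a predicate over
# an observation, and a crude complexity score (number of conjuncts).

observations = [2, 4, 8, 16]

# Hypothetical candidate hypotheses explaining which numbers belong to the set.
hypotheses = [
    ("even", lambda n: n % 2 == 0, 1),
    ("even and a power of two", lambda n: n % 2 == 0 and n & (n - 1) == 0, 2),
    ("positive", lambda n: n > 0, 1),
]

def consistent(hypothesis):
    """A hypothesis is valid only if it explains every observation."""
    _, predicate, _ = hypothesis
    return all(predicate(x) for x in observations)

valid = [h for h in hypotheses if consistent(h)]

# Occam's Razor: among the valid hypotheses, choose the lowest-complexity one.
simplest = min(valid, key=lambda h: h[2])
print(simplest[0])
```

Note that all three candidates explain the observations here; parsimony is what breaks the tie in favor of a low-complexity hypothesis rather than an over-specified one.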