[2602.16787] Better Think Thrice: Learning to Reason Causally with Double Counterfactual Consistency
Summary
This paper introduces Double Counterfactual Consistency (DCC), a method for evaluating and enhancing causal reasoning in large language models (LLMs) without needing labeled data.
Why It Matters
As LLMs increasingly impact various applications, understanding their causal reasoning capabilities is crucial. DCC provides a novel approach to assess and improve these abilities, potentially leading to more reliable AI systems in decision-making contexts.
Key Takeaways
- DCC measures causal reasoning in LLMs without labeled data.
- It verifies models' abilities in causal intervention and counterfactual prediction.
- DCC can enhance performance on reasoning tasks across different model families.
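To make the rejection-sampling idea concrete, here is a minimal sketch of how a double-counterfactual consistency check might gate candidate answers at inference time. This is an illustration, not the paper's implementation: the `generate`, `intervene`, and `undo` callables are hypothetical stand-ins, and the acceptance rule (the answer must survive an intervention followed by its reversal) is one plausible reading of "double counterfactual consistency".

```python
from typing import Callable, List


def dcc_rejection_sample(
    generate: Callable[[str], str],   # stand-in for one LLM sample on a prompt
    intervene: Callable[[str], str],  # applies a causal intervention to the prompt
    undo: Callable[[str], str],       # reverses that intervention
    prompt: str,
    n_samples: int = 8,
) -> List[str]:
    """Keep only candidate answers that are stable under an
    intervention followed by its reversal (a round-trip check).

    Hypothetical sketch of a DCC-style criterion; the paper's actual
    scoring rule is not reproduced here.
    """
    accepted = []
    for _ in range(n_samples):
        answer = generate(prompt)
        # Counterfactual pass: answer the intervened prompt (not used for
        # acceptance here, but this is where counterfactual prediction
        # would be verified).
        _cf_answer = generate(intervene(prompt))
        # Round-trip pass: reversing the intervention should recover the
        # original answer if the model's reasoning is causally consistent.
        roundtrip_answer = generate(undo(intervene(prompt)))
        if roundtrip_answer == answer:
            accepted.append(answer)
    return accepted


# Demo with deterministic stub "models" so the behavior is reproducible.
def stub_generate(p: str) -> str:
    return "yes" if "rain" in p else "no"

def stub_intervene(p: str) -> str:
    return p + " [do: rain]"

def stub_undo(p: str) -> str:
    return p.replace(" [do: rain]", "")

results = dcc_rejection_sample(
    stub_generate, stub_intervene, stub_undo,
    "Will the grass be wet?", n_samples=3,
)
```

With the stubs above, the round trip recovers the original prompt exactly, so all three samples pass the check; with a real LLM, samples whose answers drift after the intervention round trip would be rejected.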
Computer Science > Machine Learning
arXiv:2602.16787 (cs)
[Submitted on 18 Feb 2026]
Title: Better Think Thrice: Learning to Reason Causally with Double Counterfactual Consistency
Authors: Victoria Lin, Xinnuo Xu, Rachel Lawrence, Risa Ueno, Amit Sharma, Javier Gonzalez, Niranjani Prasad
Abstract: Despite their strong performance on reasoning benchmarks, large language models (LLMs) have proven brittle when presented with counterfactual questions, suggesting weaknesses in their causal reasoning ability. While recent work has demonstrated that labeled counterfactual tasks can be useful benchmarks of LLMs' causal reasoning, producing such data at the scale required to cover the vast potential space of counterfactuals is limited. In this work, we introduce double counterfactual consistency (DCC), a lightweight inference-time method for measuring and guiding the ability of LLMs to reason causally. Without requiring labeled counterfactual data, DCC verifies a model's ability to execute two important elements of causal reasoning: causal intervention and counterfactual prediction. Using DCC, we evaluate the causal reasoning abilities of various leading LLMs across a range of reasoning tasks and interventions. Moreover, we demonstrate the effectiveness of DCC as a training-free test-time rejection sampling criterion and s...