[2512.06393] Conflict-Aware Fusion: Resolving Logic Inertia in Large Language Models via Structured Cognitive Priors
Summary
This article introduces Conflict-Aware Fusion, a framework that addresses Logic Inertia in large language models (LLMs) by integrating structured cognitive priors, improving their reasoning reliability when faced with contradictory evidence.
Why It Matters
As large language models become integral to a growing range of applications, ensuring their reliability in reasoning and decision-making is crucial. This research identifies a significant failure mode (Logic Inertia) and proposes a novel solution, pointing toward more robust AI systems that can handle contradictions effectively.
Key Takeaways
- Logic Inertia in LLMs leads to failures in reasoning under contradictions.
- Conflict-Aware Fusion employs a dual-process architecture for improved reasoning.
- The framework achieves high accuracy even in the presence of contradictory evidence.
- Structured cognitive priors are essential for robust multi-step reasoning.
- This research provides a blueprint for developing more reliable AI systems.
Computer Science > Artificial Intelligence
arXiv:2512.06393 (cs)
[Submitted on 6 Dec 2025 (v1), last revised 21 Feb 2026 (this version, v3)]
Authors: Qiming Bao, Xiaoxuan Fu, Michael Witbrock
Abstract
Large language models (LLMs) excel at many natural language tasks, yet their reasoning reliability under structured perturbations of rule-based systems remains brittle. We present a controlled evaluation framework consisting of four stress tests: (1) rule deletion (redundant vs. essential); (2) contradictory evidence injection; (3) logic-preserving rewrites; and (4) multi-law equivalence stacking. While representative model families (BERT, Qwen2, and TinyLlama) achieve Acc = 1.0000 on base tasks, our framework reveals a critical failure mode termed Logic Inertia: a total breakdown (Acc = 0.0000) under contradictions, where deductive momentum overrides factual reality. To resolve this, we propose Conflict-Aware Fusion, a framework grounded in the Cognitive Structure Hypothesis, which posits that robust reasoning requires an explicit structural inductive bias. By imposing a dual-process architecture that separates premise verification from logical deduction, Conflict-Aware Fu...
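The dual-process idea in the abstract (verify premises first, deduce only from what survives) can be sketched in a few lines. This is purely illustrative: the function names (verify_premises, deduce, conflict_aware_answer) and the simple negation-based contradiction check are assumptions for exposition, not the paper's actual architecture.

```python
# Illustrative sketch of a dual-process, conflict-aware reasoning pipeline.
# All names and the toy string-based contradiction check are assumptions;
# the paper's implementation is not reproduced here.

def verify_premises(rules, evidence):
    """Stage 1: drop any rule directly contradicted by observed evidence."""
    contradicted = set()
    for rule in rules:
        # Toy contradiction test: "X" conflicts with "not X".
        negated = rule[4:] if rule.startswith("not ") else "not " + rule
        if negated in evidence:
            contradicted.add(rule)
    return [r for r in rules if r not in contradicted], contradicted

def deduce(verified_rules, query):
    """Stage 2: deduction runs only over premises that passed verification."""
    return query in verified_rules

def conflict_aware_answer(rules, evidence, query):
    verified, dropped = verify_premises(rules, evidence)
    return deduce(verified, query), dropped

# A contradicted premise no longer carries "deductive momentum":
rules = ["metal conducts", "glass conducts"]
evidence = ["not glass conducts"]
answer, dropped = conflict_aware_answer(rules, evidence, "glass conducts")
# answer is False; "glass conducts" was filtered out during verification
```

Separating the two stages is the key structural bias: a single-pass reasoner would apply "glass conducts" despite the contradicting evidence, which is exactly the Logic Inertia failure the paper describes.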