[2604.06603] Scientific Knowledge-driven Decoding Constraints Improving the Reliability of LLMs
Computer Science > Computation and Language
arXiv:2604.06603 (cs) [Submitted on 8 Apr 2026]
Title: Scientific Knowledge-driven Decoding Constraints Improving the Reliability of LLMs
Authors: Maotian Ma, Zheni Zeng, Zhenghao Liu, Yukun Yan
Abstract: Large language models (LLMs) have shown strong knowledge reserves and task-solving capabilities, but they still suffer from severe hallucination, which hinders their practical application. Although scientific theories and rules can efficiently direct the behavior of human practitioners, LLMs still do not sufficiently utilize this highly condensed knowledge through training or prompting. To address this issue, we propose SciDC, an LLM generation method that integrates subject-specific knowledge as strong constraints. By using strong LLMs to automatically convert flexible knowledge into multi-layered, standardized rules, we build an extensible framework that effectively constrains model generation on domain tasks. Experiments on scientific tasks, including industrial formulation design, clinical tumor diagnosis, and retrosynthesis planning, consistently demonstrate the effectiveness of our method, which achieves a 12% accuracy improvement on average over vanilla generation. We further discuss the potential of LLMs in automatically inductively...