[2603.10047] Toward Epistemic Stability: Engineering Consistent Procedures for Industrial LLM Hallucination Reduction
Computer Science > Software Engineering

arXiv:2603.10047 (cs)

[Submitted on 8 Mar 2026 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: Toward Epistemic Stability: Engineering Consistent Procedures for Industrial LLM Hallucination Reduction

Authors: Brian Freeman, Adam Kicklighter, Matt Erdman, Zach Gordon

Abstract: Hallucinations in large language models (LLMs) are outputs that are syntactically coherent but factually incorrect or contextually inconsistent. They remain persistent obstacles in high-stakes industrial settings such as engineering design, enterprise resource planning, and IoT telemetry platforms. We present and compare five prompt engineering strategies intended to reduce the variance of model outputs and move toward repeatable, grounded results without modifying model weights or building complex validation models. These methods are: (M1) Iterative Similarity Convergence, (M2) Decomposed Model-Agnostic Prompting, (M3) Single-Task Agent Specialization, (M4) Enhanced Data Registry, and (M5) Domain Glossary Injection. Each method is evaluated against an internal baseline using an LLM-as-Judge framework over 100 repeated runs per method (same fixed task prompt, stochastic decoding at tau = 0.7). Under this evaluation setup, M4...
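The repeated-run stability evaluation described in the abstract can be sketched minimally as follows. The token-set Jaccard metric and the example outputs are illustrative assumptions, not the paper's actual similarity measure or judge framework; a real pipeline would sample completions at temperature 0.7 and score them with an LLM-as-Judge.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two model outputs."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def stability_score(outputs: list[str]) -> float:
    """Mean pairwise similarity over repeated runs; 1.0 means perfectly repeatable."""
    pairs = list(combinations(outputs, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Stand-ins for three repeated completions of one fixed task prompt.
runs = [
    "valve pressure rating is 150 psi",
    "valve pressure rating is 150 psi",
    "the valve is rated at 300 psi",
]
print(round(stability_score(runs), 3))  # → 0.533
```

A method that reduces output variance would push this score toward 1.0 across the 100 repeated runs; comparing scores per method against the internal baseline is what the LLM-as-Judge setup operationalizes more rigorously.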