[2602.23315] Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction
Summary
This article presents an approach to reducing epistemic uncertainty in AI models: a trained model is run on several invariant transformations of each input, and the outputs are aggregated into a single, more accurate inference result.
Why It Matters
As AI models continue to evolve, understanding and mitigating uncertainties is critical for their reliability. This research offers a promising method to improve model performance, which is essential for applications in various fields, including healthcare, finance, and autonomous systems.
Key Takeaways
- The study identifies the impact of epistemic uncertainty on AI inference errors.
- It proposes a resampling method using invariant transformations to enhance accuracy.
- The approach balances model size and performance, making it practical for real-world applications.
- Partial independence of inference errors can be leveraged for improved results.
- This method could lead to more robust AI systems across various industries.
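The fourth takeaway, that partial independence of errors can be leveraged, follows from a standard variance-reduction argument (the notation below is my own, not the paper's): if the errors of the n transformed copies have common variance σ² and pairwise correlation ρ, averaging them shrinks the error variance.

```latex
% Average of n inference errors e_1, ..., e_n with
% Var(e_i) = \sigma^2 and pairwise correlation \rho:
\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} e_i\right)
  = \frac{\sigma^2}{n} + \frac{n-1}{n}\,\rho\,\sigma^2
  \;\xrightarrow[\;n\to\infty\;]{}\; \rho\,\sigma^2 .
```

Fully independent errors (ρ = 0) give the 1/n rate, while partial independence (0 < ρ < 1) caps the benefit at ρσ², which is why the degree of independence across transformed copies matters.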
Computer Science > Artificial Intelligence
arXiv:2602.23315 (cs.AI) [Submitted on 26 Feb 2026]
Title: Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction
Authors: Sha Hu
Abstract: An artificial intelligence (AI) model can be viewed as a function that maps inputs to outputs in high-dimensional spaces. Once designed and well trained, the AI model is applied for inference. However, even optimized AI models can produce inference errors due to aleatoric and epistemic uncertainties. Interestingly, we observed that when inferring multiple samples based on invariant transformations of an input, inference errors can show partial independence due to epistemic uncertainty. Leveraging this insight, we propose a "resampling"-based inference that applies a trained AI model to multiple transformed versions of an input and aggregates the inference outputs into a more accurate result. This approach has the potential to improve inference accuracy and offers a strategy for balancing model size and performance.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.23315 [cs.AI] (arXiv:2602.23315v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2602.23315 (arXiv-issued DOI via DataCite, pending registration)
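The resampling scheme in the abstract resembles test-time augmentation. Below is a minimal sketch under assumed details that are not from the paper: a toy "trained model" that is permutation-invariant by construction, random permutations as the invariant transformation, and per-call noise standing in for the epistemic component of the error.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # toy "trained" weights


def toy_model(x):
    """A stand-in trained model: exactly invariant to permutations of x
    (it sorts its input), with per-call noise mimicking epistemic error
    that is partially independent across inference calls."""
    return W @ np.sort(x) + rng.normal(scale=0.2, size=3)


def resampled_inference(model, x, n_copies=16):
    """Run the model on several invariant transformations of x
    (random permutations here) and average the outputs."""
    outs = [model(rng.permutation(x)) for _ in range(n_copies)]
    return np.mean(outs, axis=0)


x = rng.normal(size=8)
clean = W @ np.sort(x)  # noise-free reference output
err_single = np.abs(toy_model(x) - clean).mean()
err_agg = np.abs(resampled_inference(toy_model, x) - clean).mean()
```

Because the toy model is exactly invariant, each transformed copy carries the same signal but a fresh noise draw, so averaging the copies drives the aggregated error well below the single-call error on average. A real model is only approximately invariant, which is one source of the partial (rather than full) independence the paper observes.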