[2602.19439] OptiRepair: Closed-Loop Diagnosis and Repair of Supply Chain Optimization Models with LLM Agents

arXiv - Machine Learning · 4 min read · Article

Summary

The paper presents OptiRepair, a novel approach using LLM agents for diagnosing and repairing infeasible supply chain optimization models, achieving significant improvements over existing methods.

Why It Matters

Supply chain optimization is critical for operational efficiency, yet models frequently become infeasible due to modeling errors. OptiRepair addresses this challenge by using LLM agents to automate diagnosis and repair, which could reduce the scarce OR expertise these fixes currently require and speed up operational decision-making.

Key Takeaways

  • OptiRepair improves the repair of infeasible supply chain models using LLM agents.
  • The approach achieves an 81.7% Rational Recovery Rate, significantly higher than the best API model (42.2%) and the API-model average (21.3%).
  • It highlights the need for targeted training and explicit operational rationale in AI applications.
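The Rational Recovery Rate above counts a problem as recovered only when the repaired model is both feasible and passes the operational rationality checks. A minimal sketch of that metric (the input format is illustrative, not from the paper):

```python
def rational_recovery_rate(results):
    """Fraction of problems resolved to BOTH feasibility and rationality.

    results: list of (feasible, rational) booleans, one pair per problem.
    """
    recovered = sum(1 for feasible, rational in results if feasible and rational)
    return recovered / len(results)
```

Note that a model can score well on feasibility alone while still failing this metric, which is the gap the paper's rationality checks are designed to expose.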

Computer Science > Artificial Intelligence · arXiv:2602.19439 (cs) · Submitted on 23 Feb 2026

Title: OptiRepair: Closed-Loop Diagnosis and Repair of Supply Chain Optimization Models with LLM Agents

Authors: Ruicheng Ao, David Simchi-Levi, Xinshang Wang

Abstract: Problem Definition. Supply chain optimization models frequently become infeasible because of modeling errors. Diagnosis and repair require scarce OR expertise: analysts must interpret solver diagnostics, trace root causes across echelons, and fix formulations without sacrificing operational soundness. Whether AI agents can perform this task remains untested. Methodology/Results. OptiRepair splits this task into a domain-agnostic feasibility phase (iterative IIS-guided repair of any LP) and a domain-specific validation phase (five rationality checks grounded in inventory theory). We test 22 API models from 7 families on 976 multi-echelon supply chain problems and train two 8B-parameter models using self-taught reasoning with solver-verified rewards. The trained models reach 81.7% Rational Recovery Rate (RRR) -- the fraction of problems resolved to both feasibility and operational rationality -- versus 42.2% for the best API model and 21.3% on average. The gap concentrates in Phase 1 repair: API models average 27.6% recovery rate ve...
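The abstract's Phase 1 closed loop (solve, extract an Irreducible Infeasible Subsystem, repair, re-solve) can be sketched on a toy problem. This is an illustrative sketch under strong simplifications, not the paper's implementation: a real system would use a solver's IIS facility and an LLM agent to propose repairs, whereas here interval bounds on a single scalar stand in for an LP, the IIS comes from a greedy deletion filter, and a fixed `relax` callback stands in for the agent.

```python
def feasible(constraints):
    """Interval constraints {name: (lo, hi)} on one variable x are
    jointly feasible iff the intersection of the intervals is nonempty."""
    lo = max(c[0] for c in constraints.values())
    hi = min(c[1] for c in constraints.values())
    return lo <= hi

def compute_iis(constraints):
    """Greedy deletion filter: try dropping each constraint; if the rest
    stay infeasible, the constraint is not needed for the conflict and is
    dropped permanently. What survives is an irreducible infeasible set."""
    iis = dict(constraints)
    for name in list(iis):
        trial = {k: v for k, v in iis.items() if k != name}
        if not feasible(trial):
            iis = trial
    return iis

def repair_loop(constraints, relax, max_iters=10):
    """Closed loop: while infeasible, compute an IIS and ask `relax`
    (standing in for the LLM agent) to rewrite one conflicting bound."""
    for _ in range(max_iters):
        if feasible(constraints):
            return constraints
        iis = compute_iis(constraints)
        name = sorted(iis)[0]  # deterministic pick for the demo
        constraints[name] = relax(name, constraints[name])
    return constraints

# Demo: demand (x >= 80) conflicts with capacity (x <= 50); the deletion
# filter isolates the conflicting pair, and one relaxation restores feasibility.
cons = {"demand": (80, 120), "capacity": (0, 50)}
fixed = repair_loop(cons, lambda name, b: (b[0], b[1] + 60))
```

The design point the paper makes is that this feasibility loop is domain-agnostic; operational soundness is enforced separately by the Phase 2 rationality checks.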

