[2602.14154] A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers

arXiv - Machine Learning · 3 min read

Summary

This paper presents dXPP, a penalty-based framework for differentiating through black-box quadratic programming solvers, improving computational efficiency and robustness over traditional KKT methods.

Why It Matters

Differentiating through the solution of a quadratic program is a core operation in differentiable optimization, for example when a QP is embedded as a layer in a learning pipeline. By addressing the computational cost and numerical robustness limitations of existing KKT-based methods at scale, the proposed dXPP framework is a practical advance for researchers and practitioners in machine learning and optimization.

Key Takeaways

  • dXPP decouples QP solving from differentiation, enhancing efficiency.
  • The method is solver-agnostic, allowing flexibility in solver choice.
  • Empirical results show dXPP outperforms KKT-based methods in large-scale problems.
  • The approach simplifies the differentiation process, requiring only the solution of a smaller linear system in the primal variables.
  • dXPP is applicable to various tasks, including portfolio optimization.

Computer Science > Machine Learning · arXiv:2602.14154 (cs)
[Submitted on 15 Feb 2026]

Title: A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers
Authors: Yuxuan Linghu, Zhiyuan Liu, Qi Deng

Abstract: Differentiating through the solution of a quadratic program (QP) is a central problem in differentiable optimization. Most existing approaches differentiate through the Karush--Kuhn--Tucker (KKT) system, but their computational cost and numerical robustness can degrade at scale. To address these limitations, we propose dXPP, a penalty-based differentiation framework that decouples QP solving from differentiation. In the solving step (forward pass), dXPP is solver-agnostic and can leverage any black-box QP solver. In the differentiation step (backward pass), we map the solution to a smooth approximate penalty problem and implicitly differentiate through it, requiring only the solution of a much smaller linear system in the primal variables. This approach bypasses the difficulties inherent in explicit KKT differentiation and significantly improves computational efficiency and robustness. We evaluate dXPP on various tasks, including randomly generated QPs, large-scale sparse projection problems, and a real-world multi-period portfolio optimization task. Empirical resu...
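The backward pass described in the abstract can be illustrated with a minimal sketch: take the solution returned by any black-box QP solver, form the Hessian of a penalised objective at that point, and solve one linear system in the primal variables. This is an assumption-laden illustration, not the paper's implementation: it uses a hard quadratic penalty `(rho/2)||max(Ax - b, 0)||^2` with an active-set mask, whereas the paper maps to a *smooth* approximate penalty whose exact form the excerpt does not give, and `qp_penalty_backward` is a hypothetical helper name.

```python
import numpy as np

def qp_penalty_backward(Q, A, b, x_star, grad_x, rho=1e4):
    """Sketch of a penalty-style backward pass (NOT the paper's dXPP code).

    For the QP  min 0.5 x'Qx + q'x  s.t.  Ax <= b, we differentiate the
    stationarity condition of the penalised objective
        0.5 x'Qx + q'x + (rho/2) ||max(Ax - b, 0)||^2
    at the solver's solution x_star.  Given the loss gradient grad_x = dL/dx
    at x_star, the implicit function theorem gives dL/dq = -H^{-1} grad_x,
    a single linear system in the primal variables only.
    """
    # Mask of violated/active inequality rows (the paper smooths this step).
    active = (A @ x_star - b > 0).astype(float)
    # Hessian of the penalised objective in the primal variables.
    H = Q + rho * A.T @ (active[:, None] * A)
    # dx/dq = -H^{-1}  =>  dL/dq = -(H^{-1}) grad_x  (H is symmetric).
    return -np.linalg.solve(H, grad_x)

# Demo: 2-D QP  min 0.5||x||^2 + q'x  s.t.  x >= -1  (i.e. A = -I, b = 1).
Q, A, b = np.eye(2), -np.eye(2), np.ones(2)
q = np.array([0.5, -0.5])
x_star = -q  # the solution any black-box solver would return (interior point)
grad_q = qp_penalty_backward(Q, A, b, x_star, grad_x=np.ones(2))
# -> array([-1., -1.]), matching d(sum x*)/dq for x*(q) = -q
```

Note how the forward solve is entirely decoupled: `x_star` can come from OSQP, Gurobi, or any other solver, and the backward pass never revisits the KKT system or the dual variables.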
