[2602.13506] $γ$-weakly $θ$-up-concavity: Linearizable Non-Convex Optimization with Applications to DR-Submodular and OSS Functions
Summary
The paper introduces $γ$-weakly $θ$-up-concavity, a new first-order condition for optimizing monotone non-convex functions that unifies and improves approximation guarantees across a range of offline and online optimization problems.
Why It Matters
This research addresses a significant challenge in machine learning and combinatorial optimization by providing a unified framework for non-convex functions. The findings can improve algorithms for DR-submodular and OSS functions, with implications for both offline and online optimization strategies.
Key Takeaways
- Introduces a novel first-order condition for non-convex optimization.
- Establishes upper-linearizability for $γ$-weakly $θ$-up-concave functions.
- Provides unified approximation guarantees for a range of optimization problems.
- Improves existing approximation coefficients for OSS optimization.
- Demonstrates applications in both offline and online optimization settings.
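As context for the first-order condition named above, a concrete special case that the paper's framework generalizes is the standard up-concavity property of continuous DR-submodular functions: they are concave along non-negative directions. (The exact $γ$-weakly $θ$-up-concave inequality is not reproduced here; the sketch below states only the well-known DR-submodular case.)

```latex
% Up-concavity along non-negative directions: for a differentiable
% DR-submodular function F and any coordinate-wise comparable pair
% x \le y, the gradient at x linearly upper-bounds the gain:
F(y) - F(x) \;\le\; \langle \nabla F(x),\, y - x \rangle .
```

The paper's condition can be read as a parameterized weakening of this kind of inequality, with $γ$ and $θ$ controlling how tightly the linear surrogate tracks the true gain.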
Computer Science > Machine Learning
arXiv:2602.13506 (cs)
[Submitted on 13 Feb 2026]
Title: $γ$-weakly $θ$-up-concavity: Linearizable Non-Convex Optimization with Applications to DR-Submodular and OSS Functions
Authors: Mohammad Pedramfar, Vaneet Aggarwal
Abstract: Optimizing monotone non-convex functions is a fundamental challenge across machine learning and combinatorial optimization. We introduce and study $\gamma$-weakly $\theta$-up-concavity, a novel first-order condition that characterizes a broad class of such functions. This condition provides a powerful unifying framework, strictly generalizing both DR-submodular functions and One-Sided Smooth (OSS) functions. Our central theoretical contribution demonstrates that $\gamma$-weakly $\theta$-up-concave functions are upper-linearizable: for any feasible point, we can construct a linear surrogate whose gains provably approximate the original non-linear objective. This approximation holds up to a constant factor, namely the approximation coefficient, dependent solely on $\gamma$, $\theta$, and the geometry of the feasible set. This linearizability yields immediate and unified approximation guarantees for a wide range of problems. Specifically, we obtain unified approximation guarantees for offline optimization as w...
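To make the linear-surrogate idea concrete, the following sketch runs a generic continuous-greedy (Frank-Wolfe-style) loop on a toy monotone DR-submodular function, where each step maximizes the gradient's linear surrogate over the feasible set. This is an illustrative example of the general technique, not the paper's algorithm; the objective, the constraint set, and the helper names (`f`, `grad_f`, `lmo`, `continuous_greedy`) are all assumptions made for this sketch.

```python
import numpy as np

# Toy monotone DR-submodular objective (probabilistic coverage):
#   f(x) = 1 - prod_i (1 - p_i * x_i),  x in [0, 1]^n.
p = np.array([0.9, 0.6, 0.3, 0.8])

def f(x):
    return 1.0 - np.prod(1.0 - p * x)

def grad_f(x):
    # df/dx_i = p_i * prod_{j != i} (1 - p_j * x_j)
    terms = 1.0 - p * x
    return p * np.prod(terms) / terms

def lmo(g, budget):
    # Linear maximization oracle over {x in [0,1]^n : sum(x) <= budget}:
    # fill the coordinates with the largest positive surrogate weight.
    v = np.zeros_like(g)
    for i in np.argsort(-g):
        if g[i] <= 0 or budget <= 0:
            break
        v[i] = min(1.0, budget)
        budget -= v[i]
    return v

def continuous_greedy(T=100, budget=2.0):
    x = np.zeros_like(p)
    for _ in range(T):
        v = lmo(grad_f(x), budget)  # best point for the linear surrogate
        x = x + v / T               # small step toward that point
    return x

x_hat = continuous_greedy()
print(f(x_hat))  # close to 0.98 = 1 - (1 - 0.9)(1 - 0.8)
```

The point of the sketch is the role of the surrogate: the non-convex objective is never optimized directly; each iteration only solves a linear problem built from the current gradient, which is exactly the kind of step that upper-linearizability licenses, with the approximation coefficient governing the guarantee.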