[2205.12377] Hardness of Maximum Likelihood Learning of DPPs

arXiv - Machine Learning 4 min read Article

Summary

This article presents a proof of the NP-completeness of the maximum likelihood learning problem for Determinantal Point Processes (DPPs), enhancing the understanding of computational complexity in machine learning.

Why It Matters

Understanding the hardness of maximum likelihood learning for DPPs is crucial because it shapes algorithm design in machine learning applications that require diverse data selection. This work proves Kulesza's conjecture and provides a theoretical foundation for future research in computational complexity and probabilistic models.

Key Takeaways

  • Proves the NP-completeness of maximum likelihood learning for DPPs.
  • Enhances the theoretical framework for approximating log-likelihood in DPPs.
  • Demonstrates a reduction from the 3-Coloring problem to maximum likelihood learning of DPPs, linking two areas of computational complexity.

Computer Science > Computational Complexity

arXiv:2205.12377 (cs) [Submitted on 24 May 2022 (v1), last revised 25 Feb 2026 (this version, v3)]

Title: Hardness of Maximum Likelihood Learning of DPPs

Authors: Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie

Abstract: Determinantal Point Processes (DPPs) are a widely used probabilistic model for negatively correlated sets. DPPs have been successfully employed in Machine Learning applications to select a diverse, yet representative subset of data. In these applications, a set of parameters that maximizes the likelihood of the data is typically desirable. The algorithms used for this task to date either optimize over a limited family of DPPs, or use local improvement heuristics that do not provide theoretical guarantees of optimality. In seminal work on DPPs in Machine Learning, Kulesza conjectured in his PhD Thesis (2011) that the problem is NP-complete. The lack of a formal proof prompted Brunel et al. (COLT 2017) to suggest that, in opposition to Kulesza's conjecture, there might exist a polynomial-time algorithm for computing a maximum-likelihood DPP. They also presented some preliminary evidence supporting a conjecture that they suggested might lead to such an algorithm. In this work we prove Kulesza's conjecture. In fact, we prove the following stronger hardness of approximat...
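The maximum-likelihood objective that the paper proves hard to optimize can be made concrete. For an L-ensemble DPP with kernel matrix L, a subset Y of the ground set is drawn with probability det(L_Y) / det(L + I), where L_Y is the submatrix indexed by Y; the log-likelihood of a sample of subsets is the sum of the log-determinants of the corresponding submatrices minus a per-observation normalizer. A minimal NumPy sketch of evaluating (not optimizing) this objective, with all names illustrative rather than taken from the paper:

```python
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Log-likelihood of observed subsets under an L-ensemble DPP.

    P(Y) = det(L_Y) / det(L + I), so for a sample {Y_1, ..., Y_m} the
    log-likelihood is sum_i log det(L_{Y_i}) - m * log det(L + I).
    Uses slogdet for numerical stability.
    """
    n = L.shape[0]
    _, log_det_norm = np.linalg.slogdet(L + np.eye(n))
    ll = 0.0
    for Y in subsets:
        idx = np.array(sorted(Y))
        _, log_det_Y = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += log_det_Y - log_det_norm
    return ll

# Toy example: a 3-item ground set with mild negative correlation
# between adjacent items.
L = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
print(dpp_log_likelihood(L, [{0, 2}, {1}]))
```

Maximum likelihood learning asks for the PSD kernel L that maximizes this quantity over the observed data; the paper's result is that no polynomial-time algorithm can do so (or even approximate it well) unless P = NP.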

Related Articles

Yupp shuts down after raising $33M from a16z crypto's Chris Dixon | TechCrunch

Less than a year after launching, with checks from some of the biggest names in Silicon Valley, crowdsourced AI model feedback startup Yu...

TechCrunch - AI · 4 min ·
Machine Learning

[R] Fine-tuning services report

If you have some data and want to train or run a small custom model but don't have powerful enough hardware for training, fine-tuning ser...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] Does ML have a "bible"/reference textbook at the Intermediate/Advanced level?

Hello, everyone! This is my first time posting here and I apologise if the question is, perhaps, a bit too basic for this sub-reddit. A b...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] ICML 2026 review policy debate: 100 responses suggest Policy B may score higher, while Policy A shows higher confidence

A week ago I made a thread asking whether ICML 2026’s review policy might have affected review outcomes, especially whether Policy A pape...

Reddit - Machine Learning · 1 min ·
