by An Bian, Kfir Levy, Andreas Krause, Joachim M. Buhmann
Abstract:
DR-submodular continuous functions are important objectives with wide real-world applications, spanning MAP inference in determinantal point processes (DPPs) and mean-field inference for probabilistic submodular models, amongst others. DR-submodularity captures a subclass of non-convex functions that enables both exact minimization and approximate maximization in polynomial time. In this work we study the problem of maximizing non-monotone DR-submodular continuous functions under general down-closed convex constraints. We start by investigating several properties that underlie such objectives, which are then used to devise two optimization algorithms with provable guarantees. Concretely, we first devise a "two-phase" algorithm with a 1/4 approximation guarantee. This algorithm allows the use of existing methods that are guaranteed to find (approximate) stationary points as a subroutine, thus enabling us to leverage recent progress in non-convex optimization. Then we present a non-monotone Frank-Wolfe variant with a 1/e approximation guarantee and a sublinear convergence rate. Finally, we extend our approach to a broader class of generalized DR-submodular continuous functions, which captures a wider spectrum of applications. Our theoretical findings are validated on several synthetic and real-world problem instances.
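To make the Frank-Wolfe variant described above concrete, here is a minimal sketch in Python. It is not the paper's exact pseudocode: the toy objective (a non-monotone DR-submodular quadratic with an entrywise non-positive Hessian), the box constraint `[0, u]`, the coordinatewise linear maximization over the shrunken region `{v : 0 <= v <= u - x}`, and the uniform step size `1/K` are all illustrative assumptions based on the high-level description in the abstract.

```python
import numpy as np

# Hypothetical toy instance (illustrative, not from the paper):
# f(x) = h^T x + 0.5 x^T H x with H entrywise non-positive,
# which makes f continuous DR-submodular and, in general, non-monotone.
rng = np.random.default_rng(0)
n = 5
H = -rng.uniform(0.5, 1.0, (n, n))
H = (H + H.T) / 2          # symmetric, entrywise non-positive
h = rng.uniform(0.0, 1.0, n)
u = np.ones(n)             # down-closed box constraint [0, u]

def f(x):
    return h @ x + 0.5 * x @ H @ x

def grad(x):
    return h + H @ x

def nonmonotone_fw(K=100):
    """Frank-Wolfe-style sketch: each linear step is restricted so the
    iterate never leaves the box, mirroring the idea of shrinking the
    feasible region as mass is accumulated."""
    x = np.zeros(n)
    for _ in range(K):
        g = grad(x)
        # LMO over {v : 0 <= v <= u - x}: for a box this decomposes
        # coordinatewise -- move where the gradient is positive.
        v = np.where(g > 0, u - x, 0.0)
        x = x + v / K      # uniform step size 1/K
    return x

x_hat = nonmonotone_fw()
print(f(x_hat))
```

Because every update adds a nonnegative vector bounded by `u - x`, the iterate stays feasible throughout; for general down-closed polytopes the coordinatewise LMO would be replaced by a linear program over the shrunken polytope.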
Reference:
Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms. A. Bian, K. Levy, A. Krause, J. M. Buhmann. In Neural Information Processing Systems (NeurIPS), 2017.
Bibtex Entry:
@inproceedings{bian17nonmonotone,
	author = {An Bian and Kfir Levy and Andreas Krause and Joachim M. Buhmann},
	booktitle = {Neural Information Processing Systems (NeurIPS)},
	month = {December},
	title = {Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms},
	year = {2017}}