Frontiers in Mathematical Sciences


TITLE  
Constrained Sub-Determinant Maximization via Anti-Concentration


SPEAKER  
Javad Ebrahimi Boroojeni
EPFL





ABSTRACT

Several fundamental problems that arise in optimization and computer science can be cast as follows: Given vectors $v_1,\ldots,v_m \in \mathbb{R}^d$ and a constraint family $\mathcal{B} \subseteq 2^{[m]}$, find a set $S \in \mathcal{B}$ that maximizes the squared volume of the simplex spanned by the vectors in $S$. A motivating example is the ubiquitous data-summarization problem in machine learning and information retrieval, where one is given a collection of feature vectors that represent data such as documents or images. The volume of a collection of vectors is used as a measure of their diversity, and partition or matroid constraints over $[m]$ are imposed in order to ensure resource or fairness constraints. Even with a simple cardinality constraint ($\mathcal{B}={[m] \choose r}$), the problem is NP-hard and has received much attention, starting with a result of Khachiyan, who gave an $r^{O(r)}$-approximation algorithm for this problem. Recently, Nikolov and Singh presented a convex program and showed how it can be used to estimate the value of the most diverse set when there are multiple cardinality constraints (i.e., when $\mathcal{B}$ corresponds to a partition matroid). Their proof of the integrality gap of the convex program relied on an inequality of Gurvits and was recently extended to regular matroids. The question of whether these estimation algorithms can be converted into the more useful approximation algorithms -- which also output a set -- remained open.
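To make the objective concrete, here is a minimal brute-force sketch for the cardinality-constrained case $\mathcal{B}={[m] \choose r}$ (the function names `squared_volume` and `max_subdet` are illustrative, not from the paper). It maximizes the Gram determinant $\det(V_S^\top V_S)$, which equals the squared volume of the simplex spanned by $\{v_i : i \in S\}$ up to the constant factor $1/(r!)^2$ and therefore has the same maximizers:

```python
import itertools

def det(M):
    # determinant by cofactor expansion (fine for small r)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def squared_volume(vectors, S):
    # Gram determinant det(V_S^T V_S) for the chosen index set S
    V = [vectors[i] for i in S]
    gram = [[sum(a * b for a, b in zip(u, w)) for w in V] for u in V]
    return det(gram)

def max_subdet(vectors, r):
    # exhaustive search over all r-subsets of [m]; exponential in general,
    # which is why the approximation algorithms in the abstract are needed
    best = max(itertools.combinations(range(len(vectors)), r),
               key=lambda S: squared_volume(vectors, S))
    return best, squared_volume(vectors, best)
```

For instance, with $v_1=(2,0)$, $v_2=(1,2)$, $v_3=(1,1)$ and $r=2$, the pair $\{v_1,v_2\}$ attains the maximum Gram determinant of $16$. This brute force is exponential in $r$; the point of the results summarized below is to replace it with polynomial-time approximation under matroid constraints.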
The main contribution of this paper is to give the first approximation algorithms for both partition and regular matroids. We present novel formulations of the sub-determinant maximization problem for these matroids; this reduces them to the problem of finding a point that maximizes the absolute value of a non-convex function over a Cartesian product of probability simplices. The technical core of our results is a new anti-concentration inequality for dependent random variables that arise from these functions, which allows us to relate the optimal value of these non-convex functions to their value at a random point. Unlike prior work on the constrained sub-determinant maximization problem, our proofs do not rely on real-stability or convexity, and they could be of independent interest both in algorithms and in complexity, where anti-concentration phenomena have recently been deployed.