A couple months ago, at Indiana University, David Fisher, Nets Katz, and I organized a summer school on Analysis and geometry in the theory of computation. This school is one in a series organized by David and funded by NSF grant DMS-0643546 (see, e.g. last year’s school). What follows is a brief synopsis of what the school covered. All the lectures were given by the participants, and there are links to their lecture notes below. This is essentially an extended version of an introductory document I wrote for the participants, who were a mix of mathematicians and theoretical computer scientists.
In the following discussion, we will use the word efficient to describe an algorithm that runs in time polynomial in the size of its input. For a graph $G = (V, E)$, we use $\mathsf{mc}(G)$ to denote the “MAX-CUT value,” i.e. the quantity

$\displaystyle \mathsf{mc}(G) = \max_{S \subseteq V} \frac{|E(S, \bar S)|}{|E|},$
where $E(S, \bar S)$ denotes the set of edges between $S$ and its complement. It is well-known that computing $\mathsf{mc}(G)$ is NP-hard, and thus assuming $\mathsf{P} \neq \mathsf{NP}$, there is no efficient algorithm that, given $G$, outputs $\mathsf{mc}(G)$.
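To make the definition concrete, here is a brute-force computation of $\mathsf{mc}(G)$ (the function name `max_cut_value` and the edge-list representation are my own illustrative choices, not from the lectures). It enumerates all $2^n$ subsets of $V$, so it is only a sanity check, consistent with the expectation that no efficient exact algorithm exists.

```python
from itertools import combinations

def max_cut_value(n, edges):
    """Brute-force mc(G): the maximum, over all S subset of V, of the
    fraction of edges crossing between S and its complement."""
    best = 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            S = set(subset)
            crossing = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = max(best, crossing)
    return best / len(edges)

# The 5-cycle is an odd cycle, so no cut contains every edge: mc(C5) = 4/5.
edges_c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_cut_value(5, edges_c5))  # → 0.8
```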
Given this state of affairs, it is natural to ask how well we can approximate the value $\mathsf{mc}(G)$ with an efficient algorithm. For an algorithm $\mathcal A$, we use $\mathcal A(G)$ to denote its output when run on the graph $G$. If $\mathcal A$ satisfies $\mathcal A(G) \leq \mathsf{mc}(G)$ for all $G$, we define its approximation ratio as

$\displaystyle \alpha(\mathcal A) = \inf_{G} \frac{\mathcal A(G)}{\mathsf{mc}(G)}.$
Clearly $\alpha(\mathcal A) \in [0,1]$. Now we are interested in the best approximation ratio achievable by an efficient algorithm, i.e. the quantity

$\displaystyle \alpha^* = \sup \left\{ \alpha(\mathcal A) : \mathcal A \textrm{ is an efficient algorithm} \right\}.$
It should be clear that similar questions arise for all sorts of other values which are NP-hard to compute (e.g. the chromatic number of a graph, or the length of its shortest tour, or the length of its longest simple path, etc.) An algorithm of Goemans and Williamson (based on a form of convex optimization known as semi-definite programming) shows that

$\displaystyle \alpha^* \geq \alpha_{\mathrm{GW}} = \min_{0 < \theta \leq \pi} \frac{2}{\pi} \cdot \frac{\theta}{1 - \cos \theta} \approx 0.878.$
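To give a feel for where the Goemans-Williamson constant comes from, here is a small sketch of my own (not the lecture material) of the rounding step on the 5-cycle. The circle embedding below, with adjacent vertices at angle $4\pi/5$, is the SDP-optimal solution for $C_5$; a uniformly random hyperplane through the origin separates two unit vectors at angle $\theta$ with probability $\theta/\pi$, and comparing this to the SDP edge value $(1 - \cos\theta)/2$ is exactly where the ratio $\approx 0.878$ originates.

```python
import math
import random

# Unit vectors for the 5-cycle: vertex j at angle 4*pi*j/5, so each edge
# spans an angle of 4*pi/5.  (This embedding is SDP-optimal for C5.)
vecs = [(math.cos(4 * math.pi * j / 5), math.sin(4 * math.pi * j / 5))
        for j in range(5)]
edges = [(j, (j + 1) % 5) for j in range(5)]

# SDP objective: the average of (1 - <x_u, x_v>)/2 over the edges.
sdp_value = sum((1 - (vecs[u][0] * vecs[v][0] + vecs[u][1] * vecs[v][1])) / 2
                for u, v in edges) / len(edges)

def round_once(rng):
    """One Goemans-Williamson rounding: cut by a random hyperplane through
    the origin, i.e. by the sign of <g, x_v> for a random Gaussian g."""
    g = (rng.gauss(0, 1), rng.gauss(0, 1))
    side = [g[0] * x + g[1] * y >= 0 for (x, y) in vecs]
    return sum(1 for u, v in edges if side[u] != side[v]) / len(edges)

rng = random.Random(0)
avg_cut = sum(round_once(rng) for _ in range(10000)) / 10000
print(sdp_value)            # ≈ 0.9045
print(avg_cut / sdp_value)  # ≈ 0.884, safely above 0.878
```

(On the 5-cycle every hyperplane cut happens to sever exactly 4 of the 5 edges, so the empirical ratio here is essentially exact; in general the $\alpha_{\mathrm{GW}}$ guarantee only holds in expectation.)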
On the other hand, Håstad proved that, as a consequence of the PCP Theorem, it is NP-hard to obtain an approximation ratio better than $\frac{16}{17}$, i.e. if $\mathsf{P} \neq \mathsf{NP}$, then

$\displaystyle \alpha^* \leq \frac{16}{17} \approx 0.941.$
How does one prove such a theorem? Well, the NP-hardness of MAX-CUT is based on constructing graphs where every optimal solution has a particular structure (which eventually encodes the solution to another NP-hard problem like SATISFIABILITY). Similarly, the NP-hardness of obtaining even “near-optimal” solutions is proved, in part, by constructing graphs where every solution whose value is close to optimal has some very specific structure (e.g. is close, in some stronger sense, to an optimal solution).
In this way, one of the main steps in proving the inapproximability of NP-hard problems involves constructing objects which have such a “rigidity” property. This summer school is about how one can use the rigidity of analytic and geometric objects to obtain combinatorial objects with the same property. In fact, assuming something called the “Unique Games Conjecture” (which we will see later), the approximability of many constraint satisfaction problems can be tied directly to the existence of certain geometric configurations.
The first series of lectures will concern the Sparsest Cut problem in graphs and its relationship to bi-lipschitz embeddings of finite metric spaces. In particular, we will look at rigidity properties of “nice” subsets of the Heisenberg group, and how these can be used to prove limitations on a semi-definite programming approach to Sparsest Cut. In the second series, we will see how, assuming the Unique Games Conjecture (UGC), proving lower bounds on certain simple semi-definite programs actually proves lower bounds against all efficient algorithms. This will entail, among other things, an analytic view of $\{-1,1\}$-valued functions, primarily through harmonic analysis.
Sparsest Cut and embeddings
The Sparsest Cut problem is classically described as follows. We have a graph $G = (V, E)$ and two functions $C, D : V \times V \to [0, \infty)$, with $D$ not identically zero. The goal is to compute

$\displaystyle \Phi^*(G; C, D) = \min_{\emptyset \neq S \subsetneq V} \frac{C(S, \bar S)}{D(S, \bar S)},$
where we use $C(S, \bar S) = \sum_{u \in S, v \notin S} C(u,v)$ and $D(S, \bar S) = \sum_{u \in S, v \notin S} D(u,v)$. The problem has a number of important applications in computer science.
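As a concrete (and purely illustrative) sanity check, the following brute-force routine computes $\Phi^*$ on a toy instance; the names `sparsest_cut`, `cap`, and `dem` are mine, not from the lectures. It enumerates all $2^n$ cuts, so it is exponential in $n$, in line with the NP-hardness mentioned below.

```python
from itertools import combinations

def sparsest_cut(n, capacity, demand):
    """Brute-force Phi*: minimize C(S, S-bar) / D(S, S-bar) over all
    nonempty proper subsets S, where C(S, S-bar) sums capacity(u, v)
    over pairs with u in S and v outside S (and likewise for demand)."""
    best = float("inf")
    for k in range(1, n):
        for subset in combinations(range(n), k):
            S = set(subset)
            c = sum(capacity(u, v) for u in S for v in range(n) if v not in S)
            d = sum(demand(u, v) for u in S for v in range(n) if v not in S)
            if d > 0:
                best = min(best, c / d)
    return best

# Toy instance: capacities are the edges of a 4-cycle, uniform demands.
cap = lambda u, v: 1 if abs(u - v) in (1, 3) else 0
dem = lambda u, v: 1
print(sparsest_cut(4, cap, dem))  # → 0.5 (split the cycle into two paths)
```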
Computing $\Phi^*$ is NP-hard, but again we can ask for approximation algorithms. The best-known approach is based on computing the value of the Goemans-Linial semi-definite program, $\mathsf{sdp}(G; C, D)$, which is

$\displaystyle \mathsf{sdp}(G; C, D) = \min \frac{\sum_{u,v \in V} C(u,v)\, \|x_u - x_v\|^2}{\sum_{u,v \in V} D(u,v)\, \|x_u - x_v\|^2},$

where the minimum ranges over all maps $x : V \to \mathbb{R}^{|V|}$ such that $\|x_u - x_v\|^2 \leq \|x_u - x_w\|^2 + \|x_w - x_v\|^2$ for all $u, v, w \in V$.
This value can be computed by a semi-definite program (SDP), as we will see. It is an easy exercise to check that $\mathsf{sdp}(G; C, D) \leq \Phi^*(G; C, D)$, and we can ask for the smallest constant $\gamma_n$ such that for all $n$-node graphs $G$ and all functions $C, D$, we have

$\displaystyle \Phi^*(G; C, D) \leq \gamma_n \cdot \mathsf{sdp}(G; C, D).$
(E.g. it is now known that $(\log n)^{\delta} \leq \gamma_n \leq O(\sqrt{\log n} \cdot \log \log n)$ for some constant $\delta > 0$, with the upper bound proved here, and the lower bound proved here.)
By some duality arguments, one can characterize $\gamma_n$ in a different way. For a metric space $(X, d)$, write $c_1(X)$ for the infimal constant $C$ such that there exists a mapping $f : X \to L_1$ satisfying, for all $x, y \in X$,

$\displaystyle d(x,y) \leq \|f(x) - f(y)\|_1 \leq C \cdot d(x,y).$
It turns out that

$\displaystyle \gamma_n = \sup \left\{ c_1(X) : |X| = n \textrm{ and } (X, d) \textrm{ is a metric space of negative type} \right\}, \qquad (1)$

where a metric space $(X, d)$ has negative type if $(X, \sqrt{d})$ embeds isometrically into $L_2$.
This shows that determining the power of the preceding SDP is intimately connected to understanding bi-lipschitz embeddings into $L_1$. This is what we will study in the first 6 lectures.
- (Arnaud de Mesmay) In the first lecture, we will be introduced to the basic geometry of the 3-dimensional Heisenberg group $\mathbb{H}$, and how differentiation plays a role in proving lower bounds on bi-lipschitz distortion. In particular, we will see Pansu’s approach for finite-dimensional targets and a generalization to spaces with the Radon-Nikodym property (RNP), and also why a straightforward generalization would fail for $L_1$.
- (Mohammad Moharrami) Next, we will see how a differentiation approach to embeddings might work in a toy setting that uses only finite graphs. The study of “monotone subsets” (which is elementary here) also arises in the work of Cheeger and Kleiner in lectures 4 and 5. (See also this post.)
- (Sean Li) Here, we will see that there is an equivalent metric $d$ on the Heisenberg group for which $(\mathbb{H}, \sqrt{d})$ embeds isometrically into $L_2$, i.e. for which $\mathbb{H}$ becomes a metric space of negative type. This is one half of proving lower bounds on $\gamma_n$ using (1).
- (Jeehyeon Seo and John Mackay) In Lectures 4-5, we’ll look at the approach of Cheeger and Kleiner for proving that $\mathbb{H}$ does not bi-lipschitz embed into $L_1$. (Note that these authors previously offered a different approach to non-embeddability, though the one presented in these lectures is somewhat simpler.)
- (Florent Baudier) Finally, in Lecture 6, we see some embedding theorems for finite metric spaces that allow us to prove upper bounds on $\gamma_n$.
The UGC, semi-definite programs, and constraint satisfaction
In the second series of lectures, we’ll see how rigidity of geometric objects can possibly say something, not just about a single algorithm (like a semi-definite program), but about all efficient algorithms for solving a particular problem.
- (An-Sheng Jhang) First, we’ll review basic Fourier analysis on the discrete cube, and how this leads to some global rigidity theorems for cuts. These tools will be essential later. (See also these lecture notes from Ryan O’Donnell.)
- (Igor Gorodezky) Next, we’ll see a semi-definite program (SDP) for the MAX-CUT problem, and a tight analysis of its approximation ratio (which turns out to be the value $\alpha_{\mathrm{GW}} \approx 0.878$ we saw earlier).
- (Sam Daitch) In the third lecture, we’ll see the definition of the Unique Games Conjecture, and how it can be used (in an ad-hoc manner, for now) to transform our SDP analysis into a proof that the SDP-based algorithm is optimal (among all efficient algorithms) under some complexity-theoretic assumptions.
- (Deanna Needell) A key technical component of the preceding lecture is something called the Majority is Stablest Theorem that relates sufficiently nice functions on the discrete cube to functions on Gaussian space.
- (Sushant Sachdeva) In the final lecture, we’ll see Raghavendra’s work which shows that, for a certain broad class of NP-hard constraint satisfaction problems, assuming the UGC, the best-possible algorithm is the “canonical” semi-definite program. In other words, the approximation ratio for these problems is completely determined by the existence (or lack thereof) of certain vector configurations in $\mathbb{R}^n$. (See also this post.)
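As a small companion to the harmonic-analysis thread above (a sketch of my own, not part of the lecture notes), the Fourier coefficients of a function $f : \{-1,1\}^n \to \{-1,1\}$ can be computed directly from the definition $\hat f(S) = \mathbb{E}_x\big[f(x) \prod_{i \in S} x_i\big]$:

```python
from itertools import product

def fourier_coeff(f, n, S):
    """hat{f}(S): average over x in {-1,1}^n of f(x) * prod_{i in S} x_i."""
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

# Majority on 3 bits has the expansion (x0 + x1 + x2)/2 - (x0*x1*x2)/2.
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(fourier_coeff(maj3, 3, [0]))        # → 0.5
print(fourier_coeff(maj3, 3, [0, 1, 2]))  # → -0.5
print(fourier_coeff(maj3, 3, [0, 1]))     # → 0.0
```

By Parseval, the squared coefficients of a $\{-1,1\}$-valued function sum to 1 (here $3 \cdot \tfrac14 + \tfrac14 = 1$); results like the Majority is Stablest Theorem concern how this weight is distributed across the levels $|S|$.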