# Lecture 4: Conformal mappings, circle packings, and spectral geometry

In Lecture 2, we used spectral partitioning to rule out the existence of a strong parallel repetition theorem for unique games.  In practice, spectral methods are a very successful heuristic for graph partitioning, and in the present lecture we’ll see how to analyze these partitioning algorithms for some common families of graphs.

### Balanced separators, eigenvalues, and Cheeger’s inequality

Lipton and Tarjan proved that every planar graph has a negligibly small set of nodes whose removal splits the graph into two roughly equal pieces.  More specifically, every n-node planar graph can be partitioned into three disjoint sets $A,B,S$ such that there are no edges from $A$ to $B$, the separator $S$ has at most $O(\sqrt{n})$ nodes, and $|A|,|B| \geq n/3$.  This allows one to do all sorts of things; e.g., a simple divide-and-conquer algorithm gives a linear-time $(1+\epsilon)$-approximation for the maximum independent set problem in such graphs, for any $\epsilon > 0$.

So there is a natural question of how well spectral methods do, for example, on planar graphs.  Spielman and Teng showed that for bounded-degree planar graphs, a simple recursive spectral algorithm recovers a partition $V=A \cup B$ of the vertex set so that $|E(A,B)| = O(\sqrt{n})$.  In other words, for bounded-degree planar graphs, spectral methods recover the Lipton-Tarjan separator theorem!  This is proved by combining Cheeger’s inequality with their main theorem.

Theorem [Spielman-Teng]: Every n-node planar graph with maximum degree $d_{\max}$ has $\displaystyle \lambda_2(G) = O\left(\frac{d_{\max}}{n}\right)$, where $\lambda_2(G)$ is the second eigenvalue of the combinatorial Laplacian on $G$.

Recall that we introduced the combinatorial Laplacian in Lecture 2.  If $G=(V,E)$ is an arbitrary finite graph, in this lecture it will make more sense to think about the Laplacian $\Delta$ as an operator on functions $f : V \to \mathbb R$ given by

$\displaystyle \Delta f(x) = \mathrm{deg}(x) f(x) - \sum_{y : xy \in E} f(y).$

If we define the standard inner product $\langle f,g\rangle = \sum_{x \in V} f(x)g(x)$, then one can easily check that for any such $f$, we have $\langle f, \Delta f\rangle = \sum_{xy \in E} |f(x)-f(y)|^2$.  In particular, this implies that $\Delta$ is a positive semi-definite operator.  If we denote its eigenvalues by $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$, then it is also easy to check that $\lambda_1 = 0$, with corresponding eigenfunction $f(x)=1$ for every $x\in V$.

Thus by standard variational principles, we have

$\displaystyle \lambda_2 = \min_{f \neq 0 : \sum_{x \in V} f(x)=0} \frac{\sum_{xy \in E} |f(x)-f(y)|^2}{\sum_{x \in V} f(x)^2}.$
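These facts are easy to check numerically.  Here is a small sketch (using NumPy; the 5-cycle is my choice of example graph, not one from the lecture) verifying the quadratic form identity $\langle f, \Delta f\rangle = \sum_{xy \in E} |f(x)-f(y)|^2$ and the fact that $\lambda_1 = 0$:

```python
import numpy as np

# Small example graph: the 5-cycle, given by its edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

# Combinatorial Laplacian: (Δf)(x) = deg(x) f(x) - sum_{y : xy in E} f(y).
L = np.zeros((n, n))
for x, y in edges:
    L[x, x] += 1
    L[y, y] += 1
    L[x, y] -= 1
    L[y, x] -= 1

# Quadratic form identity: <f, Δf> = sum_{xy in E} (f(x) - f(y))^2.
rng = np.random.default_rng(0)
f = rng.standard_normal(n)
lhs = f @ L @ f
rhs = sum((f[x] - f[y]) ** 2 for x, y in edges)
assert abs(lhs - rhs) < 1e-9

# Eigenvalues: λ1 = 0 (constant eigenfunction), and λ2 is the minimum
# Rayleigh quotient over functions orthogonal to the constants.
eigvals = np.linalg.eigvalsh(L)
assert abs(eigvals[0]) < 1e-9
lam2 = eigvals[1]
# For the n-cycle, the eigenvalues are 2 - 2cos(2πj/n), so λ2 = 2 - 2cos(2π/5).
assert abs(lam2 - (2 - 2 * np.cos(2 * np.pi / 5))) < 1e-9
```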

Let us also define the Cheeger constant $h_G$.  For an arbitrary subset $S \subseteq V$, let

$\displaystyle h(S) = \frac{|E(S, \bar S)|}{\min(|S|,|\bar S|)}.$

Note that this definition differs from the $h$ we defined in Lecture 2, because here we will be discussing eigenfunctions without boundary conditions.  Now one defines $h_G = \min_{S \subseteq V} h(S)$.

Finally, we have the version of Cheeger’s inequality (proved by Alon and Milman in the discrete setting) for graphs without boundary.

Cheeger’s inequality: If $G=(V,E)$ is any graph with maximum degree $d_{\max}$, then

$\displaystyle \lambda_2(G) \geq \frac{h_G^2}{2d_{\max}}.$

This follows fairly easily from the Dirichlet version of Cheeger’s inequality presented in Lecture 2.  Here’s a sketch:  Let $f : V \to \mathbb R$ satisfy $\Delta f = \lambda_2 f$, and suppose, without loss of generality, that $V_+ = \{ x : f(x) > 0 \}$ has $|V_+| \leq n/2$.  Define $f_+(x)=f(x)$ for $f(x) > 0$ and $f_+(x)=0$ otherwise.  Then $f_+|_B = 0$ for $B = V \setminus V_+$, so we can plug $f_+$ into the Dirichlet version of Cheeger’s inequality with boundary conditions on $B$.  For the full analysis, see this note, which essentially follows this approach.  By examining the proof, note that one can find a subset $S \subseteq V$ with $h(S) \leq \sqrt{2 d_{\max} \lambda_2}$ by a simple “sweep” algorithm:  Arrange the vertices $V = \{v_1, v_2, \ldots, v_n\}$ so that $f(v_1) \leq f(v_2) \leq \cdots \leq f(v_n)$, and output the best of the $n-1$ cuts $\{v_1, \ldots, v_i\}, \{v_{i+1}, \ldots, v_n\}$.
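The sweep algorithm is short enough to spell out.  Here is a self-contained sketch (using NumPy; the 6×6 grid graph is my stand-in for a bounded-degree planar graph) that checks the sweep cut satisfies $h(S) \leq \sqrt{2 d_{\max} \lambda_2}$:

```python
import numpy as np

# Build the 6x6 grid graph (planar, with maximum degree 4).
side = 6
n = side * side

def idx(i, j):
    return i * side + j

edges = []
for i in range(side):
    for j in range(side):
        if i + 1 < side:
            edges.append((idx(i, j), idx(i + 1, j)))
        if j + 1 < side:
            edges.append((idx(i, j), idx(i, j + 1)))

# Combinatorial Laplacian and its second eigenpair.
L = np.zeros((n, n))
for x, y in edges:
    L[x, x] += 1; L[y, y] += 1
    L[x, y] -= 1; L[y, x] -= 1
eigvals, eigvecs = np.linalg.eigh(L)
lam2, f = eigvals[1], eigvecs[:, 1]

# Sweep: order vertices by eigenfunction value, take the best prefix cut.
order = np.argsort(f)
best_h = float("inf")
in_S = set()
for i in range(n - 1):
    in_S.add(order[i])
    cut = sum(1 for x, y in edges if (x in in_S) != (y in in_S))
    best_h = min(best_h, cut / min(i + 1, n - i - 1))

# Cheeger guarantee for the sweep cut.
d_max = 4
assert best_h <= np.sqrt(2 * d_max * lam2)
```

Recomputing the cut from scratch at each step is quadratic; in a real implementation one maintains the cut size incrementally as vertices cross the threshold.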

So using the eigenvalue theorem of Spielman and Teng, along with Cheeger’s inequality, we can find a set $S \subseteq V$ with $h(S) \lesssim \sqrt{d_{\max}/n}$.  While this cut has the right Cheeger constant, it is not necessarily balanced (i.e. $\min(|S|, |\bar S|)$ could be very small).  But one can apply this algorithm recursively, perhaps continually cutting small chunks off of the graph until a balanced cut is collected.  Refer to the Spielman-Teng paper for details.  A great open question is how one might use spectral information about $G$ to recover a balanced cut immediately, without the need for recursion.

### Conformal mappings and circle packings

Now we focus on proving the bound $\lambda_2(G) \lesssim d_{\max}/n$ for any planar graph $G$.  A natural analog is to ask what happens for the Laplace-Beltrami operator for a Riemannian metric on the 2-sphere.  In fact, Hersch considered this problem almost 40 years ago and proved that $\lambda_2(M) \lesssim 1/\mathrm{vol}(M)$ for any such Riemannian manifold $M$.  His approach was to first use the uniformization theorem to get a conformal mapping from $M$ onto $S^2$, and then pull back the standard second eigenfunctions on $S^2 \subseteq \mathbb R^3$ (which are just the three coordinate projections).  Since the Dirichlet energy is conformally invariant in dimension 2, this almost works, except that the pulled-back maps might not be orthogonal to the constant function.  To fix this, he post-processed the initial conformal mapping with an appropriate Möbius transformation.
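The conformal invariance used here can be seen in one line: under a conformal change of metric $\tilde g = e^{2u} g$ on a surface (this is special to dimension 2), gradients and areas rescale by opposite factors, which cancel in the Dirichlet energy:

```latex
|\nabla f|_{\tilde g}^2 = e^{-2u}\,|\nabla f|_{g}^2,
\qquad
d\mathrm{vol}_{\tilde g} = e^{2u}\, d\mathrm{vol}_{g}
\quad\Longrightarrow\quad
\int_M |\nabla f|_{\tilde g}^2 \, d\mathrm{vol}_{\tilde g}
 = \int_M |\nabla f|_{g}^2 \, d\mathrm{vol}_{g}.
```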

Unaware of Hersch’s work, Spielman and Teng derived eigenvalue bounds for planar graphs using the discrete analog of this approach:  Circle packings replace conformal mappings, and one still has to show the existence of an appropriate post-processing Möbius transformation.

# Lecture 2: Spectral partitioning and near-optimal foams

In the last lecture, we reduced the problem of cheating in $\mathcal G_m^{\otimes k}$ (the k-times repeated m-cycle game) to finding a small set of edges $\mathcal E$ in $(\mathbb Z_m^k)_\infty$ whose removal eliminates all topologically non-trivial cycles.  Such a set $\mathcal E$ is called a spine. To get some intuition about how many edges such a spine should contain, let’s instead look at a continuous variant of the problem.

### Spines, Foams, and Isoperimetry

Consider again the $k$-dimensional torus $\mathcal T^k = \mathbb R^k/\mathbb Z^k$, which one can think of as $\lbrack 0,1)^k$ with opposite sides identified.  Say that a nice set (e.g. a compact, $C^\infty$ surface) $\mathcal E \subseteq \mathcal T^k$ is a spine if it intersects every non-contractible loop in $\mathcal T^k$.  This is the continuous analog of a spine in $(\mathbb Z_m^k)_\infty$.  We will try to find such a spine $\mathcal E$ with surface area, i.e. $\mathrm{Vol}_{k-1}(\mathcal E)$, as small as possible.

Let’s consider some easy bounds.  First, it is clear that the set

$\displaystyle \mathcal E = \left\{(x_1, \ldots, x_k) \in [0,1)^k : \exists i \in \{1,2,\ldots,k\}, x_i = 0\right\}$

is a spine with $\mathrm{Vol}_{k-1}(\mathcal E) = k$.  (A moment’s thought shows that this is “equivalent” to the provers playing independent games in each coordinate of $\mathcal G_m^{\otimes k}$.)
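The volume count is immediate: $\mathcal E$ is a union of $k$ coordinate slices, each a copy of a $(k-1)$-dimensional torus of volume 1, and the pairwise intersections have zero $(k-1)$-volume:

```latex
\mathrm{Vol}_{k-1}(\mathcal E)
 = \sum_{i=1}^{k} \mathrm{Vol}_{k-1}\bigl(\{x \in [0,1)^k : x_i = 0\}\bigr)
 = k \cdot 1 = k.
```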

To get a good lower bound, it helps to relate spines to foams which tile $\mathbb R^k$ according to $\mathbb Z^k$, as follows.  Take two potential spines.

To determine which curve is actually a spine, we can repeatedly tile them side-by-side.

The first tiling contains the blue bi-infinite curve, which obviously gives a non-trivial cycle in $\mathcal T^k$, while the second yields a tiling of $\mathbb R^k$ by bodies of volume 1.  It is easy to deduce the following claim.

Claim: A surface $\mathcal E \subseteq \mathcal T^k$ is a spine if and only if it induces a tiling of $\mathbb R^k$ by bodies of volume 1 which is invariant under shifts by $\mathbb Z^k$.

By the isoperimetric inequality in $\mathbb R^k$, this immediately yields the bound

$\displaystyle \mathrm{Vol}_{k-1}(\mathcal E) \geq \mathrm{Vol}_{k-1}(\partial B_k) \approx \sqrt{k},$

where $B_k$ is a ball of volume 1 in $\mathbb R^k$.
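The $\sqrt{k}$ asymptotics can be checked directly.  A ball of volume 1 in $\mathbb R^k$ has boundary of $(k-1)$-volume $k\,\omega_k^{1/k}$, where $\omega_k = \pi^{k/2}/\Gamma(k/2+1)$ is the volume of the unit ball, and this quantity is $\approx \sqrt{2\pi e k}$.  A quick numerical sketch (standard library only):

```python
import math

# Surface area of a ball of volume 1 in R^k: if ω_k = π^{k/2} / Γ(k/2 + 1)
# is the volume of the unit ball, a ball of volume 1 has radius
# r = ω_k^{-1/k}, so its boundary has (k-1)-volume k·ω_k·r^{k-1} = k·ω_k^{1/k}.
def surface_of_unit_volume_ball(k):
    log_omega = (k / 2) * math.log(math.pi) - math.lgamma(k / 2 + 1)
    return k * math.exp(log_omega / k)

# The ratio to sqrt(2·π·e·k) tends to 1 as k grows, so the isoperimetric
# lower bound on the surface area of a spine indeed grows like sqrt(k).
ratios = {k: surface_of_unit_volume_ball(k) / math.sqrt(2 * math.pi * math.e * k)
          for k in (10, 100, 1000)}
assert 0.75 < ratios[10] < 1.0
assert ratios[1000] > 0.99
```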

So the surface area of an optimal spine lies somewhere between $k$ and $\sqrt{k}$.  On the one hand, cubes tile very nicely but have large surface area.  On the other hand, we have sphere-like objects which have small surface area, but don’t seem (at least intuitively) to tile very well at all.  As first evidence that this intuition isn’t quite right, note that it is known how to cover $\mathbb R^k$ by disjoint bodies of volume at most 1 so that the surface area/volume ratio grows like $\sqrt{k}$.  See Lemma 3.16 in this paper, which is based on Chekuri et al.  It’s just that these covers are not invariant under $\mathbb Z^k$ shifts.

Before we reveal the answer, let’s see what consequences the corresponding discrete bounds would have for parallel repetition of the m-cycle game.  If the “cube bound” were tight, we would have $\mathsf{val}(\mathcal G_m^{\otimes k}) \approx 1 - \frac{k}{m}$, which doesn’t rule out a strong parallel repetition theorem ($\alpha^*=1$ in the previous lecture).  If the “sphere bound” were tight, we would have $\mathsf{val}(\mathcal G_m^{\otimes k}) \approx 1 - \frac{\sqrt{k}}{m}$, which shows that $\alpha^* \geq 2$.   In the latter case, the approach to proving equivalence of the UGC and MAX-CUT conjectures doesn’t even get off the ground.

As the astute reader might have guessed, recently Ran Raz proved that $\mathsf{val}(\mathcal G_m^{\otimes k}) \geq 1 - C\frac{\sqrt{k}}{m}$ for some constant $C > 0$, showing that a strong parallel repetition theorem—even for unique games—is impossible.   Subsequently, Kindler, O’Donnell, Rao, and Wigderson showed that there exists a spine $\mathcal E \subseteq \mathcal T^k$ with $\mathrm{Vol}_{k-1}(\mathcal E) \approx \sqrt{k}$.  While it is not difficult to show that the continuous result implies Raz’s discrete result, we will take a direct approach found recently by Alon and Klartag.

# Lecture 1: Cheating with foams

This is the first lecture of CSE 599S:  Analytical and geometric methods in the theory of computation.  Today we’ll consider the gap amplification problem for 2-prover games, and see how it’s intimately related to some high-dimensional isoperimetric problems about foams.  In the next lecture, we’ll use spectral techniques to find approximately optimal foams (which will then let us cheat at repeated games).

### The PCP Theorem, 2-prover games, and parallel repetition

For a 3-CNF formula $\varphi$, let $\mathsf{sat}(\varphi)$ denote the maximum fraction of clauses in $\varphi$ which are simultaneously satisfiable. For instance, $\varphi$ is satisfiable if and only if $\mathsf{sat}(\varphi)=1$. One equivalent formulation of the PCP Theorem is that the following problem is NP-complete:

Formulation 1.

Given a 3-CNF formula $\varphi$, answer YES if $\mathsf{sat}(\varphi)=1$ and NO if $\mathsf{sat}(\varphi) \leq 0.9$ (any answer is acceptable if neither condition holds).

We can restate this result in the language of 2-prover games. A 2-prover game $\mathcal G$ consists of four finite sets $Q,Q',A,A'$, where $Q$ and $Q'$ are sets of questions, while $A$ and $A'$ are sets of answers to the questions in $Q$ and $Q'$, respectively. There is also a verifier $V : Q \times Q' \times A \times A' \to \{0,1\}$ which checks the validity of answers. For a pair of questions $(q,q')$ and answers $(a,a')$, the verifier is satisfied if and only if $V(q,q',a,a')=1$. The final component of $\mathcal G$ is a probability distribution $\mu$ on $Q \times Q'$.

Now a strategy for the game consists of two provers $P : Q \to A$ and $P' : Q' \to A'$ who map questions to answers. The score of the two provers $(P,P')$ is precisely

$\displaystyle\mathsf{val}_{P,P'}(\mathcal G) = \Pr \left[V(q, q', P(q), P(q'))=1\vphantom{\bigoplus}\right],$

where $(q,q')$ is drawn from $\mu$. This is just the probability that the verifier is happy with the answers provided by the two provers. The value of the game is now defined as

$\displaystyle\mathsf{val}(\mathcal G) = \max_{P,P'} \mathsf{val}_{P,P'}(\mathcal G),$

i.e. the best-possible score achievable by any two provers.
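For small games, $\mathsf{val}(\mathcal G)$ can be computed by enumerating all deterministic prover pairs (shared randomness never helps, by convexity).  As an illustration, here is a sketch using a standard toy game that does not appear in the lecture: the classical CHSH/XOR game, whose value is $3/4$.

```python
from itertools import product

# A toy 2-prover game: questions Q = Q' = {0,1} with the uniform product
# distribution, answers A = A' = {0,1}, and the verifier accepts iff
# a XOR a' = q AND q' (the classical CHSH game, used here only as an example).
questions = [(q, qp) for q in (0, 1) for qp in (0, 1)]

def V(q, qp, a, ap):
    return int(a ^ ap == q & qp)

# A deterministic prover is just a function {0,1} -> {0,1}; enumerate all pairs.
strategies = list(product((0, 1), repeat=2))  # (answer to question 0, answer to 1)
val = max(
    sum(V(q, qp, P[q], Pp[qp]) for q, qp in questions) / len(questions)
    for P in strategies for Pp in strategies
)
assert val == 0.75  # no deterministic pair wins more than 3 of the 4 question pairs
```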

Now we can again restate the PCP Theorem as saying that the following problem is NP-complete:

Formulation 2.

Given a 2-prover game $\mathcal G$ with $|A|,|A'|=O(1)$, answer YES if $\mathsf{val}(\mathcal G)=1$ and answer NO if $\mathsf{val}(\mathcal G) \leq 0.99$.

To see that Formulation 1 implies Formulation 2, consider, for any 3-CNF formula $\varphi$, the game $\mathcal G_{\varphi}$ defined as follows. $Q$ is the set of clauses in $\varphi$, $Q'$ is the set of variables in $\varphi$, while $A = \{ TTT, TTF, TFT, FTT, TFF, FTF, FFT, FFF \}$ and $A' = \{T, F\}$. Here $A$ represents the set of eight possible truth assignments to a three-variable clause, and $A'$ represents the set of possible truth assignments to a variable.

The distribution $\mu$ is defined as follows: Choose first a uniformly random clause $C \in Q$, and then uniformly at random one of the three variables $x \in Q'$ which appears in $C$. An answer $(a_C, a_x) \in A \times A'$ is valid if the assignment $a_C$ makes $C$ true, and if $a_x$ and $a_C$ are consistent in the sense that they give the same truth value to the variable $x$. The following statement is an easy exercise:

For every 3-CNF formula $\varphi$, we have $\mathsf{val}(\mathcal G_{\varphi}) = \frac23 + \frac13 \mathsf{sat}(\varphi)$.

The best strategy is to choose an assignment $\mathcal A$ to the variables in $\varphi$:  $P'$ plays according to $\mathcal A$, while $P$ plays according to $\mathcal A$ unless he is about to answer with an assignment that doesn’t satisfy the clause $C$.  At that point, he flips one of the literals to make his assignment satisfying (in this case, the chance of catching $P$ cheating is only $1/3$, the probability that $P'$ is sent the variable that $P$ flipped).

This completes our argument that Formulation 1 implies Formulation 2.
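The exercise can also be checked by brute force.  The key observation is that $P'$ is literally a truth assignment, and once $P'$ is fixed, the optimal $P$ answers each clause independently; so the search space is just the $2^n$ assignments.  A sketch (the formula consisting of all 8 sign patterns on three variables is my example, chosen so that $\mathsf{sat}(\varphi) = 7/8$):

```python
from itertools import product

# Verify val(G_φ) = 2/3 + sat(φ)/3 on a small formula by brute force.
# Example formula: all 8 clauses on variables {0,1,2}, one per sign pattern.
# A literal (v, s) is satisfied by an assignment alpha iff alpha[v] == s.
clauses = [[(v, s) for v, s in zip((0, 1, 2), signs)]
           for signs in product((True, False), repeat=3)]

def satisfies(alpha, clause):
    return any(alpha[v] == s for v, s in clause)

def sat_fraction(clauses, n=3):
    return max(sum(satisfies(a, c) for c in clauses)
               for a in product((True, False), repeat=n)) / len(clauses)

def game_value(clauses, n=3):
    best = 0.0
    for alpha in product((True, False), repeat=n):  # P' is an assignment
        total = 0.0
        for clause in clauses:
            # P answers with a satisfying assignment a_C for the clause,
            # scoring the fraction of its 3 variables where a_C agrees
            # with alpha (the variable sent to P' is uniform in the clause).
            total += max(
                sum(a[i] == alpha[v] for i, (v, _) in enumerate(clause)) / 3
                for a in product((True, False), repeat=3)
                if any(a[i] == s for i, (_, s) in enumerate(clause))
            )
        best = max(best, total / len(clauses))
    return best

s = sat_fraction(clauses)
assert abs(s - 7 / 8) < 1e-9
assert abs(game_value(clauses) - (2 / 3 + s / 3)) < 1e-9
```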

### Parallel repetition
A very natural question is whether the constant $0.99$ in Formulation 2 can be replaced by $0.001$ (or an arbitrarily small constant). A natural way to amplify the gap is via “parallel repetition” of a given game. Starting with a game $\mathcal G = \langle Q,Q',A,A',V,\mu \rangle$, we can consider the game $\mathcal G^{\otimes k} = \langle Q^k, Q'^k, A^k, A'^k, V^{\otimes k}, \mu^{\otimes k} \rangle$, where $\mu^{\otimes k}$ is just the product distribution on $(Q \times Q')^k \cong Q^k \times Q'^k$. Here, we choose $k$ pairs of questions $(q_1, q'_1), \ldots, (q_k, q'_k)$ i.i.d. from $\mu$, and the two provers then respond with answers $(a_1, \ldots, a_k) \in A^k$ and $(a'_1, \ldots, a'_k) \in A'^k$. The verifier $V^{\otimes k}$ is satisfied if and only if $V(q_i, q'_i, a_i, a'_i)=1$ for every $i = 1, 2, \ldots, k$.

Clearly $\mathsf{val}(\mathcal G^{\otimes k}) \geq \mathsf{val}(\mathcal G)^k$: given a strategy $(P,P')$ for $\mathcal G$, we can play the same strategy in every coordinate, and then the probability of winning is just the probability of simultaneously winning $k$ independent games. But is there a more clever strategy that can do better? Famously, some early papers in this area assumed it was obvious that $\mathsf{val}(\mathcal G^{\otimes k}) = \mathsf{val}(\mathcal G)^k$.

In fact, there are easy examples of games $\mathcal G$ where $\mathsf{val}(\mathcal G^{\otimes 2}) = \mathsf{val}(\mathcal G) = \frac12$.  (Exercise: Show that this is true for the following game devised by Uri Feige. The verifier chooses two independent random bits $b, b' \in \{0,1\}$, and sends $b$ to $P$ and $b'$ to $P'$. The answers of the two provers are from the set $\{1,2\} \times \{0,1\}$. The verifier accepts if both provers answer $(1,b)$ or both provers answer $(2,b')$.)
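The two-fold repetition claim is the exercise, but the single-shot value of Feige's game is easy to confirm by enumeration (a sketch, checking only that $\mathsf{val}(\mathcal G) = \frac12$):

```python
from itertools import product

# Feige's game: the verifier draws independent bits b, b', sends b to P and
# b' to P'; answers lie in {1,2} x {0,1}; accept iff both provers answer
# (1, b) or both answer (2, b').
answers = [(t, v) for t in (1, 2) for v in (0, 1)]

def V(b, bp, a, ap):
    return int((a == ap == (1, b)) or (a == ap == (2, bp)))

# A deterministic prover maps its bit to one of the 4 answers.
strategies = list(product(answers, repeat=2))  # indexed by the received bit
val = max(
    sum(V(b, bp, P[b], Pp[bp]) for b in (0, 1) for bp in (0, 1)) / 4
    for P in strategies for Pp in strategies
)
assert val == 0.5  # no pair of provers wins more than 2 of the 4 question pairs
```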

Nevertheless, in a seminal work, Ran Raz proved the Parallel Repetition Theorem, which states that the value of the repeated game does, in fact, drop exponentially.

Theorem 1.1: For every 2-prover game $\mathcal G$, there exists a constant $c = O(\log(|A|+|A'|))$ such that if $\mathsf{val}(\mathcal G) \leq 1-\epsilon$, then for every $k \in \mathbb N$,

$\displaystyle\mathsf{val}(\mathcal G^{\otimes k}) \leq (1-\epsilon^{3})^{k/c}.$

The exponent 3 above is actually due to an improvement of Holenstein (Raz’s original paper can be mined for an exponent of 32).