Following some recent applications by Mamoru Tanaka and Laurent Miclo, I was asked whether there is a short, no-frills, self-contained, and (possibly) quantitatively non-optimal proof of the higher-order Cheeger inequalities from our paper with Shayan Oveis-Gharan and Luca Trevisan. I thought I would post it here. (If you’re hungering for something new, see this recently posted preprint of my coauthors relating higher eigenvalues to graph expansion.)

*[Update: The application of Miclo can also be done using the higher-order Cheeger inequalities of Louis, Raghavendra, Tetali, and Vempala.]*

The main simplification comes from doing the random partitioning non-optimally with axis-parallel cubes. For ease of notation, we will deal only with regular graphs, but there will be no quantitative dependence on the degree and this assumption can be removed (see the full paper).

Suppose that $G = (V,E)$ is a connected, $n$-vertex, $d$-regular graph. Define the Laplacian by $\mathcal{L} = I - \frac{1}{d} A$, where $A$ is the adjacency matrix of $G$. We will think of $\mathcal{L}$ as acting on $\ell^2(V)$, the space of functions $f : V \to \mathbb{R}$ equipped with the norm $\|f\| = \left(\sum_{v \in V} f(v)^2\right)^{1/2}$. $\mathcal{L}$ is positive semi-definite with spectrum

$$0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n \leq 2.$$

We define the Rayleigh quotient of a non-zero function $f \in \ell^2(V)$ by

$$\mathcal{R}(f) = \frac{\sum_{\{u,v\} \in E} (f(u)-f(v))^2}{d \sum_{v \in V} f(v)^2},$$

where the numerator is summed over edges of $G$. By the variational principle for eigenvalues, we have

$$\lambda_k = \min_{\substack{W \subseteq \ell^2(V) \\ \dim W = k}}\ \max_{0 \neq f \in W} \mathcal{R}(f). \qquad (1)$$
For a subset $S \subseteq V$, we define the *expansion of $S$* by $\varphi(S) = \mathcal{R}(\mathbf{1}_S)$, where $\mathbf{1}_S$ is the indicator function of $S$.

Finally, for $k \in \mathbb{N}$, we define the *$k$-way expansion constant of $G$* by

$$\rho_G(k) = \min \max \left\{ \varphi(S_1), \varphi(S_2), \ldots, \varphi(S_k) \right\},$$

where the minimum is over collections $S_1, S_2, \ldots, S_k$ of disjoint, non-empty subsets of $V$.

The classical discrete Cheeger inequality asserts that

$$\frac{\lambda_2}{2} \leq \rho_G(2) \leq \sqrt{2 \lambda_2}.$$
We will now prove the following generalization. See the full paper for a discussion of the surrounding issues and better quantitative bounds.

Theorem 1 For every $k \in \mathbb{N}$, we have

$$\frac{\lambda_k}{2} \leq \rho_G(k) \leq 30\, k^3 \sqrt{2 \lambda_k}. \qquad (2)$$

First, let’s prove the (easy) LHS of (2). Suppose we have $S_1, \ldots, S_k \subseteq V$ which are disjoint and non-empty and which satisfy $\varphi(S_i) \leq \rho_G(k)$ for each $i$. Then certainly $W = \mathrm{span}(\mathbf{1}_{S_1}, \ldots, \mathbf{1}_{S_k})$ is a $k$-dimensional subspace of $\ell^2(V)$. On the other hand, every $f \in W$ satisfies $\mathcal{R}(f) \leq 2 \rho_G(k)$ because if $f = \sum_{i=1}^k c_i \mathbf{1}_{S_i}$, then

$$\sum_{\{u,v\} \in E} (f(u)-f(v))^2 \leq \sum_{i=1}^k 2 c_i^2 \, |E(S_i, \bar{S}_i)| \leq 2 \max_i \varphi(S_i) \cdot d \sum_{i=1}^k c_i^2 |S_i|,$$

where we have used the fact that if $u \in S_i$ and $v \in S_j$ with $i \neq j$, then

$$(f(u)-f(v))^2 = (c_i - c_j)^2 \leq 2 c_i^2 + 2 c_j^2.$$

But now using (1), the subspace $W$ witnesses the fact that $\lambda_k \leq 2 \rho_G(k)$.

To prove the more difficult RHS of (2), we will use the following discrete Cheeger inequality with boundary conditions.

Lemma 2 For any non-zero $\psi \in \ell^2(V)$, there is a subset $S \subseteq \mathrm{supp}(\psi)$ such that

$$\varphi(S) \leq \sqrt{2 \mathcal{R}(\psi)}.$$
*Proof:* Let $M = \max_v \psi(v)^2$. Observe that for each $\tau \in [0, M)$, one has $\emptyset \neq S_\tau = \{ v \in V : \psi(v)^2 > \tau \} \subseteq \mathrm{supp}(\psi)$. For $S \subseteq V$, let $E(S, \bar{S})$ denote the edges of $G$ with exactly one endpoint in $S$. Then we have

$$\int_0^{M} |E(S_\tau, \bar{S}_\tau)| \, d\tau = \sum_{\{u,v\} \in E} |\psi(u)^2 - \psi(v)^2| = \sum_{\{u,v\} \in E} |\psi(u)-\psi(v)| \cdot |\psi(u)+\psi(v)|$$

$$\leq \sqrt{\sum_{\{u,v\} \in E} (\psi(u)-\psi(v))^2} \cdot \sqrt{\sum_{\{u,v\} \in E} (\psi(u)+\psi(v))^2} \leq \sqrt{\mathcal{R}(\psi) \cdot d \|\psi\|^2} \cdot \sqrt{2 d \|\psi\|^2} = \sqrt{2 \mathcal{R}(\psi)} \cdot d \|\psi\|^2.$$

On the other hand, $\int_0^{M} |S_\tau| \, d\tau = \sum_{v} \psi(v)^2 = \|\psi\|^2$, thus

$$\int_0^{M} |E(S_\tau, \bar{S}_\tau)| \, d\tau \leq \sqrt{2 \mathcal{R}(\psi)} \cdot d \int_0^{M} |S_\tau| \, d\tau,$$

implying there exists a $\tau \in [0, M)$ such that $\varphi(S_\tau) \leq \sqrt{2 \mathcal{R}(\psi)}$. $\Box$

In light of the preceding lemma, to prove the RHS of (2), it suffices to find $k$ disjointly supported functions $\psi_1, \ldots, \psi_k \in \ell^2(V)$ such that $\mathcal{R}(\psi_i) \leq 30^2 k^6 \lambda_k$ for each $i = 1, \ldots, k$. Then Lemma 2 guarantees the existence of $k$ disjoint subsets of vertices satisfying our desired conclusion.

Toward this end, let $f_1, f_2, \ldots, f_k \in \ell^2(V)$ denote orthonormal functions satisfying $\mathcal{L} f_i = \lambda_i f_i$ for each $i = 1, \ldots, k$, i.e. consider the first $k$ eigenfunctions of the Laplacian. Define the map $F : V \to \mathbb{R}^k$ via

$$F(v) = \left( f_1(v), f_2(v), \ldots, f_k(v) \right).$$

We also put $D_F(u,v) = \left\| \frac{F(u)}{\|F(u)\|} - \frac{F(v)}{\|F(v)\|} \right\|$. (Since $\lambda_1 = 0$ and $G$ is connected, the function $f_1$ takes the value $1/\sqrt{n}$ everywhere, hence $F$ is never zero and this is well-defined.)
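For concreteness, here is the spectral embedding and the radial distance $D_F$ on toy data — hand-made orthonormal functions on four vertices, not actual eigenfunctions; the two facts checked below use only orthonormality:

```python
# The embedding F(v) = (f_1(v), ..., f_k(v)) and the radial distance
# D_F(u,v) = || F(u)/||F(u)|| - F(v)/||F(v)|| ||.
import math, itertools

n, k = 4, 2
f1 = [0.5, 0.5, 0.5, 0.5]     # the constant function 1/sqrt(n)
f2 = [0.5, 0.5, -0.5, -0.5]   # orthogonal to f1, norm 1
F = [(f1[v], f2[v]) for v in range(n)]

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def D_F(u, v):
    nu, nv = norm(F[u]), norm(F[v])
    return norm(tuple(a / nu - b / nv for a, b in zip(F[u], F[v])))

# sum_v ||F(v)||^2 = k, because the f_i are orthonormal.
total_mass = sum(norm(x) ** 2 for x in F)
assert abs(total_mass - k) < 1e-12
# D_F is a distance between unit vectors, so it never exceeds 2.
assert all(D_F(u, v) <= 2.0 for u, v in itertools.product(range(n), repeat=2))
```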

The next lemma shows that, in order to find disjointly supported functions, it suffices to find “separated sets.”

Lemma 3 Suppose that for some $\delta \in (0,1]$ and $\varepsilon > 0$, there exist subsets $T_1, T_2, \ldots, T_k \subseteq V$ satisfying the conditions:

- For every $i \in \{1, \ldots, k\}$, we have $\sum_{v \in T_i} \|F(v)\|^2 \geq \varepsilon$, and
- For every $i \neq j$, if $u \in T_i$ and $v \in T_j$, then $D_F(u,v) \geq \delta$.

Then there are disjointly supported functions $\psi_1, \ldots, \psi_k$ such that $\mathcal{R}(\psi_i) \leq \frac{25 k \lambda_k}{\varepsilon \delta^2}$ for each $i \in \{1, \ldots, k\}$.

*Proof:* For each $i$, we define maps $\theta_i, \psi_i : V \to \mathbb{R}$ by

$$\theta_i(v) = \max\left(0,\, 1 - \tfrac{2}{\delta}\, D_F(v, T_i)\right), \qquad \psi_i(v) = \theta_i(v)\, \|F(v)\|,$$

where $D_F(v, T_i) = \min_{u \in T_i} D_F(v, u)$. Observe that, by the triangle inequality, for $u, v \in V$, we have

$$|\theta_i(u) - \theta_i(v)| \leq \frac{2}{\delta}\, D_F(u,v). \qquad (3)$$

Observe that since $\theta_i$ is identically $1$ on $T_i$, we have

$$\|\psi_i\|^2 \geq \sum_{v \in T_i} \|F(v)\|^2 \geq \varepsilon. \qquad (4)$$

Next, we argue that the functions $\theta_1, \ldots, \theta_k$ are disjointly supported. This immediately implies that the $\psi_i$ are disjointly supported. If $\theta_i(v), \theta_j(v) \neq 0$ for some $v \in V$ and $i \neq j$, then there are points $u \in T_i$ and $u' \in T_j$ such that $D_F(v,u) < \delta/2$ and $D_F(v,u') < \delta/2$. But then by the triangle inequality, $D_F(u,u') < \delta$, violating condition (ii).

Finally, we bound $\mathcal{R}(\psi_i)$. For any $u, v \in V$, we have

$$|\psi_i(u) - \psi_i(v)| \leq \theta_i(u) \left| \|F(u)\| - \|F(v)\| \right| + \|F(v)\|\, |\theta_i(u) - \theta_i(v)| \leq \|F(u)-F(v)\| + \|F(v)\|\, |\theta_i(u) - \theta_i(v)|. \qquad (5)$$

Now using (3),

$$\|F(v)\|\, |\theta_i(u) - \theta_i(v)| \leq \frac{2}{\delta}\, \|F(v)\|\, D_F(u,v) \leq \frac{4}{\delta}\, \|F(u)-F(v)\|,$$

where we have used the fact that for any non-zero vectors $x, y \in \mathbb{R}^k$, we have

$$\left\| \frac{x}{\|x\|} - \frac{y}{\|y\|} \right\| \leq \frac{2\, \|x - y\|}{\max(\|x\|, \|y\|)}. \qquad (6)$$

Using (6) and the fact that $\delta \leq 1$ in (5) yields

$$|\psi_i(u) - \psi_i(v)| \leq \left(1 + \frac{4}{\delta}\right) \|F(u)-F(v)\| \leq \frac{5}{\delta}\, \|F(u)-F(v)\|. \qquad (7)$$

Finally, observe that

$$\sum_{\{u,v\} \in E} \|F(u)-F(v)\|^2 = \sum_{i=1}^k \sum_{\{u,v\} \in E} (f_i(u)-f_i(v))^2 = d \sum_{i=1}^k \lambda_i \leq d\, k\, \lambda_k.$$

Combining this with (7) and (4) shows that

$$\mathcal{R}(\psi_i) = \frac{\sum_{\{u,v\} \in E} (\psi_i(u)-\psi_i(v))^2}{d\, \|\psi_i\|^2} \leq \frac{25}{\delta^2} \cdot \frac{d\, k\, \lambda_k}{d\, \varepsilon} = \frac{25 k \lambda_k}{\varepsilon \delta^2}. \qquad \Box$$
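Here is a sketch of this kind of bump-function construction on toy data — a hypothetical embedding $F$ and hypothetical sets $T_1, T_2$ of my own choosing — checking that the resulting $\psi_i$ are disjointly supported:

```python
# Bump functions psi_i(v) = ||F(v)|| * max(0, 1 - (2/delta) D_F(v, T_i)),
# built from two D_F-separated sets T_1, T_2.
import math

F = [(1.0, 0.0), (0.98, 0.2), (0.7, 0.7), (0.2, 0.98), (0.0, 1.0)]
T = [{0}, {4}]                       # two well-separated "cluster" sets

def norm(x):
    return math.sqrt(x[0] ** 2 + x[1] ** 2)

def D_F(u, v):
    nu, nv = norm(F[u]), norm(F[v])
    return norm((F[u][0] / nu - F[v][0] / nv, F[u][1] / nu - F[v][1] / nv))

delta = min(D_F(u, v) for u in T[0] for v in T[1])   # separation of T_1 and T_2

def psi(i, v):
    dist = min(D_F(v, u) for u in T[i])              # D_F(v, T_i)
    return norm(F[v]) * max(0.0, 1.0 - (2.0 / delta) * dist)

supports = [{v for v in range(len(F)) if psi(i, v) > 0.0} for i in range(2)]
assert supports[0].isdisjoint(supports[1])           # disjointly supported
```

The vertex with embedding direction halfway between the two clusters (the point $(0.7, 0.7)$) ends up in neither support, exactly as in the disjointness argument above.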

We will first make the task of finding separated sets slightly easier.

Lemma 4 Suppose that for some $\delta, \varepsilon > 0$ and $m \in \mathbb{N}$, there are subsets $T_1, T_2, \ldots, T_m \subseteq V$ which satisfy:

- $\sum_{i=1}^m \sum_{v \in T_i} \|F(v)\|^2 \geq (k+1)\, \varepsilon$.
- For every $i \neq j$, if $u \in T_i$ and $v \in T_j$, then $D_F(u,v) \geq \delta$.
- For every $i \in \{1, \ldots, m\}$, $\sum_{v \in T_i} \|F(v)\|^2 \leq \varepsilon$.

Then there are sets satisfying the assumption of Lemma 3 with $\varepsilon$ replaced by $\varepsilon/2$.

*Proof:* We can form the desired sets by taking disjoint unions of the sets $T_i$. Begin with the family $\{T_1, \ldots, T_m\}$ and repeatedly replace two sets $T$ and $T'$ by their union whenever they each satisfy $\sum_{v \in T} \|F(v)\|^2 \leq \varepsilon/2$. Merging preserves condition (ii), and every set that arises has mass at most $\varepsilon$: the initial sets satisfy this by (iii), and a union is only ever formed from two sets of mass at most $\varepsilon/2$ each.

When we finish, we are left with a collection of sets, each member of which satisfies $\sum_{v \in T} \|F(v)\|^2 \geq \varepsilon/2$, and possibly one set of mass less than $\varepsilon/2$. By (i), we must end with at least $k$ sets which each satisfy $\sum_{v \in T} \|F(v)\|^2 \geq \varepsilon/2$: if only $r$ such sets remained, the total mass would be at most $r \varepsilon + \varepsilon/2$, so (i) forces $r \geq k + \tfrac{1}{2}$, hence $r \geq k+1$. Taking any $k$ of them completes the proof. $\Box$

Our final lemma, which finishes the proof of Theorem 1, simply asserts that such sets exist. We will no longer need the fact that $f_1, \ldots, f_k$ come from eigenfunctions of the Laplacian.

Lemma 5 Suppose that $f_1, \ldots, f_k$ are orthonormal in $\ell^2(V)$. Let $F(v) = (f_1(v), \ldots, f_k(v))$ and $\delta = \frac{1}{2 \sqrt{2}\, k^{5/2}}$. Then there is an $\varepsilon \geq \frac{4}{9}$, an $m \in \mathbb{N}$, and subsets $T_1, \ldots, T_m \subseteq V$ such that

- $\sum_{i=1}^m \sum_{v \in T_i} \|F(v)\|^2 \geq (k+1)\, \varepsilon$.
- For every $i \neq j$, if $u \in T_i$ and $v \in T_j$, then $D_F(u,v) \geq \delta$.
- For every $i \in \{1, \ldots, m\}$, $\sum_{v \in T_i} \|F(v)\|^2 \leq \varepsilon$.

*Proof:* Consider the $n \times k$ matrix $B$ which has columns $f_1, \ldots, f_k$. For any $x \in \mathbb{R}^k$, we have

$$\sum_{v \in V} \langle F(v), x \rangle^2 = \|Bx\|^2 = \|x\|^2, \qquad (8)$$

since the columns of $B$ are orthonormal.

For a subset $U$ of the unit sphere $S^{k-1} \subseteq \mathbb{R}^k$, we put $V_U = \{ v \in V : F(v)/\|F(v)\| \in U \}$. Fix some $U$ and let $\Delta$ denote the diameter of $U$. Every $v \in V_U$ satisfies $\langle F(v)/\|F(v)\|, x_0 \rangle \geq 1 - \Delta^2/2$ for any fixed $x_0 \in U$, so by (8), we have

$$\sum_{v \in V_U} \|F(v)\|^2 \leq \left(1 - \frac{\Delta^2}{2}\right)^{-2} \sum_{v \in V_U} \langle F(v), x_0 \rangle^2 \leq \left(1 - \frac{\Delta^2}{2}\right)^{-2}. \qquad (9)$$

Now, let $P$ be a partition of $\mathbb{R}^k$ into axis-parallel cubes of side length $\frac{1}{\sqrt{2k}}$, so that every cube of $P$ has diameter $\frac{1}{\sqrt{2}}$. For any such cube $Q$, we let $Q^{\circ}$ denote the set of points of $Q$ which are at Euclidean distance at least $\delta/2$ from every side of $Q$. Observe that, under a uniformly random axis-parallel translation of $P$, a given point of $S^{k-1}$ lies within distance $\delta/2$ of some side of its cube with probability at most

$$k \cdot \delta \sqrt{2k} = \frac{1}{2k}. \qquad (10)$$

Consider now the collection of sets $\{ V_{Q^{\circ} \cap S^{k-1}} : Q \in P \}$. Since each cube has diameter $\frac{1}{\sqrt{2}}$, (9) implies that each of these sets satisfies the mass upper bound in condition (iii). Furthermore, by construction, for any $Q \neq Q'$, $u \in V_{Q^{\circ} \cap S^{k-1}}$ and $v \in V_{Q'^{\circ} \cap S^{k-1}}$, the points $F(u)/\|F(u)\|$ and $F(v)/\|F(v)\|$ differ by at least $\delta$ in some coordinate, and hence

$$D_F(u,v) \geq \delta.$$

Thus the collection of sets satisfies both conditions (ii) and (iii) of the lemma. We are left to verify (i).

Note that $\sum_{v \in V} \|F(v)\|^2 = k$. If we choose a uniformly random axis-parallel translation of the partition $P$, then (10) implies

$$\mathbb{E}\left[ \sum_{Q \in P}\ \sum_{v \in V_{Q^{\circ} \cap S^{k-1}}} \|F(v)\|^2 \right] \geq \left(1 - \frac{1}{2k}\right) k = k - \frac{1}{2}.$$

In particular, there exists some fixed partition that achieves this bound. Since $k - \frac{1}{2} \geq (k+1) \cdot \frac{4}{9}$ for $k \geq 2$ (and the case $k = 1$ of Theorem 1 is trivial, as $\rho_G(1) = \varphi(V) = 0$), for this partition the sets $\{ V_{Q^{\circ} \cap S^{k-1}} \}$ satisfy all three conditions of the lemma, completing the proof. $\Box$

Combining Lemmas 5, 4, and 3 (with $\varepsilon = \frac{2}{9}$ and $\delta = \frac{1}{2\sqrt{2}\, k^{5/2}}$) now yields $\mathcal{R}(\psi_i) \leq \frac{25 k \lambda_k}{\varepsilon \delta^2} = 30^2 k^6 \lambda_k$, which is what we needed for the RHS of (2).
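The geometric heart of the cube argument — points lying $\delta/2$-deep inside distinct cells of an axis-parallel grid are at distance at least $\delta$ from one another — can be checked numerically in dimension $k = 2$ (the grid side, margin, and sample points below are my own toy choices):

```python
# Shifted-grid separation: "deep" points of distinct cells are >= delta apart.
import math, itertools

s, delta = 0.5, 0.1                   # cell side length and separation target
points = [(math.cos(t / 10.0), math.sin(t / 10.0)) for t in range(63)]  # unit circle

def cell(p):
    return tuple(math.floor(c / s) for c in p)

def deep(p):
    # at Euclidean distance >= delta/2 from every side of its cell
    return all(min(c - math.floor(c / s) * s, (math.floor(c / s) + 1) * s - c) >= delta / 2
               for c in p)

interior = [p for p in points if deep(p)]
assert interior                        # some points survive the trimming
for p, q in itertools.combinations(interior, 2):
    if cell(p) != cell(q):
        # they differ by >= 2 * (delta/2) in some coordinate
        assert math.dist(p, q) >= delta
```

This is exactly why different sets in the collection $\{V_{Q^{\circ} \cap S^{k-1}}\}$ are $\delta$-separated in the radial metric.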

How does this proof relate to the original proof of Oveis Gharan and Trevisan in http://arxiv.org/pdf/1107.2686v1.pdf?

Comment by s.e. — February 25, 2013 @ 6:40 am

I do not understand formula (3); could you please make it more explicit? Thank you.

Comment by Nguyen Van Phuc _ Viet Nam — April 17, 2013 @ 6:20 pm

I agree, there are a number of steps contained in that inequality. First, observe that if $S$ is a set in a metric space and $d(x,S)$ is the distance from $x$ to the closest point in $S$, then $|d(x,S) - d(y,S)| \leq d(x,y)$. In other words, the map $f(x) = d(x,S)$ is 1-Lipschitz. Now observe that scaling a 1-Lipschitz map by $C$, it becomes $C$-Lipschitz. Adding a constant, it remains $C$-Lipschitz. Finally, if $f$ is $C$-Lipschitz, then the map $g(x) = \max(0, f(x))$ is also $C$-Lipschitz. Combine all these together for the distance $D_F$ and you get (3).

Comment by James Lee — April 17, 2013 @ 6:32 pm

Thank you very much, Professor Lee. I am very interested in this problem and am studying it in earnest.

Comment by Nguyen Van Phuc _ Viet Nam — April 17, 2013 @ 8:47 pm

Dear Mr. Lee.

I think there is a typographical error in Lemma 3:

$\mathcal{R}(\psi_i) \leq 25k / (\varepsilon \delta^2) $

should be $\mathcal{R}(\psi_i) \leq 25k \lambda_k / (\varepsilon \delta^2) $,

because at the end of the proof, when the three bounds are combined, the factor $\lambda_k$ remains.

Comment by Nguyen Van Phuc_Viet Nam — May 11, 2013 @ 1:26 am

and if we choose $$ \delta = \frac{1}{2 \sqrt{2} k^{5/2}} $$ as in (2) of Lemma 5, it gives:

$$ \mathcal{R}(\psi_i) \leq 800 k^6 \lambda_k, $$

better than $$ \mathcal{R}(\psi_i) \leq 30^2 k^6 \lambda_k. $$

Is this true?

Thank you very much.

Comment by Nguyen Van Phuc_Viet Nam — May 11, 2013 @ 1:56 am

Dear Mr. Lee,

I think there is a typographical error in Lemma 3:

$\mathcal{R}(\psi_i) \leq 25k / (\varepsilon \delta^2) $ should be $\mathcal{R}(\psi_i) \leq 25k \lambda_k / (\varepsilon \delta^2) $,

because at the end of the proof, when the three bounds are combined, the factor $\lambda_k$ remains.

[Thanks. I fixed it. --j]

Comment by Nguyen Van Phuc — May 12, 2013 @ 12:01 am
