tcs math – some mathematics of theoretical computer science

June 15, 2008

Eigenvalue multiplicity and growth of groups

Filed under: Math — James Lee @ 9:21 am

This post is less about mathematics in TCS than about mathematics around TCS: specifically, spectral graph theory and the structure of finite groups. Earlier this year, at an IPAM conference on expander graphs, Terry Tao presented Bruce Kleiner’s new proof of Gromov’s theorem. After the talk, Luca Trevisan asked whether there exists an analog of certain steps in the proof for finite groups. Recently, Yury Makarychev and I gave a partial answer to Luca’s question in our paper Eigenvalue multiplicity and volume growth.

Gromov’s theorem

Let G be an infinite, finitely-generated group with a finite, symmetric generating set S = S^{-1}. One defines the Cayley graph \mathsf{Cay}(G;S) as the undirected |S|-regular graph with vertex set G which has an edge \{u,v\} whenever u = vs for some s \in S.

We let B(R) denote the set of all elements in G that can be written as a product of at most R generators (B(R) is the ball of radius R about the identity in the word metric). G is said to have polynomial growth if there exists a number m \in \mathbb N such that

|B(R)| = O(R^m)

as R \to \infty. Polynomial growth is a property of the group and does not depend on the choice of finite generating set S (because each element of one fixed generating set can be written as a word of length O(1) in any other).

It is straightforward, for instance, that every finitely generated abelian group has polynomial growth: since the group is abelian, an element of B(R) is determined by the multiset of at most R generators used to write it, so |B(R)| \leq {R + d \choose d} = O(R^d), where d = |S|. Wolf proved a generalization of this: in fact, every finitely generated nilpotent group has polynomial growth. On the other hand, the free group on two generators does not have polynomial growth, since there |B(R)| = 2 \cdot 3^R - 1 grows exponentially.
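To make these growth rates concrete, here is a minimal Python sketch (my own illustration, not from the paper; the helper names are mine) that computes |B(R)| by breadth-first search: once for \mathbb Z^2 with the standard generators, and once for the free group on two generators represented by reduced words. The first sequence grows quadratically; the second exponentially.

def ball_sizes(identity, generators, R):
    # Sizes |B(0)|, ..., |B(R)| in Cay(G; S), where each generator is given
    # as a function acting on group elements (S is assumed symmetric).
    seen = {identity}
    frontier = [identity]
    sizes = [1]
    for _ in range(R):
        next_frontier = []
        for x in frontier:
            for s in generators:
                y = s(x)
                if y not in seen:
                    seen.add(y)
                    next_frontier.append(y)
        frontier = next_frontier
        sizes.append(len(seen))
    return sizes

# Z^2 with generators (+/-1, 0), (0, +/-1): |B(R)| = 2R^2 + 2R + 1.
z2_gens = [lambda v, d=d: (v[0] + d[0], v[1] + d[1])
           for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
print(ball_sizes((0, 0), z2_gens, 8))

# Free group on a, b as reduced words (A = a^{-1}, B = b^{-1}): |B(R)| = 2*3^R - 1.
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def times(word, g):
    # Right-multiply a reduced word by a generator, cancelling if possible.
    return word[:-1] if word and word[-1] == INV[g] else word + g

f2_gens = [lambda w, g=g: times(w, g) for g in 'aAbB']
print(ball_sizes('', f2_gens, 8))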

Notice also that every finite group trivially has polynomial growth. This fact extends a bit: if G is an arbitrary group and N is a subgroup of finite index, then polynomial growth for N implies polynomial growth for G. Combining this with Wolf's result, we see that a finitely generated group has polynomial growth if it has a nilpotent subgroup of finite index. In a stunning work, Gromov proved the conjecture of Milnor that this sufficient condition is also necessary: every finitely generated group of polynomial growth has a nilpotent subgroup of finite index.

Gromov’s proof

Imagine starting with the integer lattice \mathbb Z^2, and slowly zooming out so that the gaps in the grid become smaller and smaller. As you move far enough away, the grid seems to morph into the continuum \mathbb R^2. Gromov defines this process abstractly, and shows that every group of polynomial growth “converges” to a finite-dimensional limit object on which the group acts by isometries (just as \mathbb Z^2 acts on \mathbb R^2 by translation). Finally, the Gleason-Montgomery-Zippin-Yamabe structure theory of locally compact groups is used to classify the limit object. The jump from geometry (polynomial growth of balls) to algebra is encapsulated in the following result.

Theorem 1: If G is a finitely generated infinite group of polynomial growth, then either

1. There exists a sequence of finite-dimensional linear representations \rho_i : G \to GL_k(\mathbb C) with |\rho_i(G)| \to \infty, or

2. There exists a single finite-dimensional linear representation \rho : G \to GL_k(\mathbb C), with |\rho(G)| infinite.

Here, GL_k(\mathbb C) is the general linear group. (Kleiner’s proof, which I’ll discuss momentarily, shows that actually (2) always holds.)

From Theorem 1, and work of Jordan and Tits, Gromov is able to conclude the following.

Theorem 2: Let G be a finitely generated infinite group of polynomial growth. Then G has a finite index subgroup which admits a homomorphism onto \mathbb Z.

From this fact, and a theorem of Milnor on solvable groups, an induction on the degree of growth finishes the argument (see Tits’ appendix in Gromov’s paper for a 2-page version of this argument).

What about finite groups?

Luca’s question concerned a (quantitative) version of Theorem 1 for finite groups. In this case, it’s not even clear how one defines “polynomial growth.” A possible definition is that there exists a generating set S such that in \mathsf{Cay}(G;S), one has |B(R)| \leq C R^m for some numbers C,m. Unfortunately, this property seems quite unwieldy in the finite case. We make a stronger assumption, using the doubling constant

c_{G;S} = \displaystyle \max_{R \geq 0} \frac{|B(2R)|}{|B(R)|}.

Observe that in the infinite case, c_{G;S} < \infty implies polynomial growth: iterating the doubling bound gives |B(R)| \leq c_{G;S}^{\lceil \log_2 R \rceil} |B(1)| = O(R^{\log_2 c_{G;S}}). It turns out (though it requires Gromov’s theorem to prove!) that if an infinite Cayley graph has polynomial growth, then it also has c_{G;S} < \infty; in fact, it must satisfy R^m/C \leq |B(R)| \leq C R^m for some constants C, m. It is unclear whether a similar phenomenon holds in the finite case.
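For a finite group the maximum in the definition is over finitely many radii, so c_{G;S} is easy to compute for small examples. The following sketch (again my own illustration, with generators given as functions acting on group elements, and assuming S generates G) computes c_{G;S} for the cyclic group \mathbb Z_{100} with generators \pm 1, where the balls are intervals and the answer comes out just below 2.

def doubling_constant(identity, generators, group_size):
    # Ball sizes |B(0)|, |B(1)|, ... via BFS, until the ball is all of G.
    seen = {identity}
    frontier = [identity]
    sizes = [1]
    while len(seen) < group_size:
        next_frontier = []
        for x in frontier:
            for s in generators:
                y = s(x)
                if y not in seen:
                    seen.add(y)
                    next_frontier.append(y)
        frontier = next_frontier
        sizes.append(len(seen))
    # Beyond the diameter the balls stop growing.
    sizes += [group_size] * len(sizes)
    return max(sizes[2 * r] / sizes[r] for r in range(len(sizes) // 2))

# Z_100 with generators +1 and -1: balls are intervals, so the ratio is close to 2.
n = 100
gens = [lambda x, d=d: (x + d) % n for d in (1, -1)]
print(doubling_constant(0, gens, n))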

We prove the following quantitative analogs of Theorems 1 and 2 above, for finite groups.

Theorem 1 (finite): Let G be a finite group with symmetric generating set S. Then there exist constants k and \delta, depending only on the doubling constant c_{G;S}, such that G has a linear representation \rho : G \to GL_k(\mathbb R) with |\rho(G)| \geq |G|^{\delta}.

Theorem 2 (finite): Let G be a finite group with symmetric generating set S. Then there exist constants \alpha and \varepsilon, depending only on the doubling constant c_{G;S}, such that G has a normal subgroup N having index at most \alpha, and N admits a homomorphism onto the cyclic group \mathbb Z_M, where M \geq |G|^{\varepsilon}.

In fact, if c = c_{G;S}, then one can take k \approx e^{\log^2 c} and \delta \approx 1/\log(c) in the first theorem, and \alpha \approx k^{k^2} and \varepsilon \approx 1/k in the second.

Eigenvalue multiplicity and the Laplacian

It seems hopeless to use Gromov’s approach for finite groups; indeed, quite literally, zooming out from a finite group converges to a single point. Kleiner’s remarkable new proof is discussed in detail in Terry Tao’s blog entry. He completely avoids Gromov’s limiting process, and the difficult classification of the resulting limit objects. Instead, his proof is based on estimating the dimension of the space of harmonic functions of fixed polynomial growth on the Cayley graph of G. Kleiner’s approach is inspired by similar work of Colding and Minicozzi in the setting of non-negatively curved manifolds.

Define the discrete Laplacian on functions f : G \to \mathbb R by

\displaystyle \Delta f(x) = f(x) - \frac{1}{|S|} \sum_{s \in S} f(xs)

A harmonic function f on G is one for which \Delta f \equiv 0. It is straightforward to verify that every harmonic function on a connected, finite graph is constant, so again we seem stuck for finite groups.
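As a sanity check, one can write \Delta down as a matrix and verify both statements numerically: the spectrum starts at 0, and the kernel (the harmonic functions) is one-dimensional on a connected Cayley graph. A small illustrative sketch (assuming numpy; the helper name is mine), using \mathsf{Cay}(\mathbb Z_{12}; \{\pm 1\}):

import numpy as np

def cayley_laplacian(elements, generators):
    # Matrix of (Delta f)(x) = f(x) - (1/|S|) * sum_{s in S} f(xs).
    index = {g: i for i, g in enumerate(elements)}
    L = np.eye(len(elements))
    for x in elements:
        for s in generators:
            L[index[x], index[s(x)]] -= 1.0 / len(generators)
    return L

# Cay(Z_12; {+1, -1}) is a 12-cycle.
n = 12
elements = list(range(n))
gens = [lambda x, d=d: (x + d) % n for d in (1, -1)]
L = cayley_laplacian(elements, gens)

eigvals = np.linalg.eigvalsh(L)        # L is symmetric because S = S^{-1}
print(np.round(eigvals, 4))            # 0 = lambda_1 <= lambda_2 <= ...
print(int(np.sum(eigvals < 1e-9)))     # kernel is 1-dimensional: harmonic => constant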

Fortunately, though, the Laplacian is very nice on finite graphs. In particular, it is a self-adjoint operator on the |G|-dimensional space of functions L^2(G) = \{ f : G \to \mathbb R \}, with eigenvalues 0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_{|G|}. The second eigenspace of \Delta is the subspace W_2 \subseteq L^2(G) given by W_2 = \{ f \in L^2(G) : \Delta f = \lambda_2 f \}, and the (geometric) multiplicity of \lambda_2 is defined to be \mathsf{dim}(W_2). In some sense, W_2 contains the most “harmonic-like” functions on G which are orthogonal to the constant functions. The basis of our strategy is a bound on this multiplicity, proved by “scaling down” the approach of Colding-Minicozzi and Kleiner. In order to show that the second eigenfunctions are “harmonic enough,” we need precise bounds on \lambda_2 in terms of c_{G;S}, which we obtain in the paper. This yields the following theorem.

Theorem: For G finite, the multiplicity of the 2nd eigenvalue of the Laplacian on \mathsf{Cay}(G;S) is at most \displaystyle \exp\left(\log^2(c_{G;S})\right).

(In fact, we prove more general bounds on the multiplicity of higher eigenvalues, and for more general graphs than Cayley graphs.)
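For intuition, here is a quick numerical check (my own illustration, not from the paper) on the discrete torus \mathbb Z_n \times \mathbb Z_n with generators (\pm 1, 0), (0, \pm 1), whose doubling constant stays bounded as n grows: the multiplicity of \lambda_2 comes out to 4 here, independent of n.

import numpy as np

def cayley_laplacian(elements, generators):
    # Same Delta-as-a-matrix construction as in the previous snippet.
    index = {g: i for i, g in enumerate(elements)}
    L = np.eye(len(elements))
    for x in elements:
        for s in generators:
            L[index[x], index[s(x)]] -= 1.0 / len(generators)
    return L

def lambda2_and_multiplicity(L, tol=1e-8):
    eigvals = np.linalg.eigvalsh(L)               # ascending order
    lam2 = next(v for v in eigvals if v > tol)    # smallest nonzero eigenvalue
    return lam2, int(np.sum(np.abs(eigvals - lam2) < tol))

# Discrete torus Z_n x Z_n with generators (+/-1, 0) and (0, +/-1).
n = 8
elements = [(a, b) for a in range(n) for b in range(n)]
gens = [lambda v, d=d: ((v[0] + d[0]) % n, (v[1] + d[1]) % n)
        for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
lam2, mult = lambda2_and_multiplicity(cayley_laplacian(elements, gens))
print(lam2, mult)   # multiplicity 4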

To pass from the multiplicity bound to Theorem 1 (finite) above, we use the fact that G acts on L^2(G) via the action (\rho(g) f)(x) = f(g^{-1} x). This action commutes with the Laplacian, i.e. \rho(g) \Delta f = \Delta (\rho(g) f) (see the computation below), so that G in fact acts by linear transformations on the second eigenspace W_2. Thus in order to finish the proof of Theorem 1 (finite), we need only show that |\rho(G)| is large.
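Spelled out, the commutation is a one-line change of variables: \rho acts by left multiplication, while \Delta averages over right multiplication by generators, so for every g, x \in G,

\displaystyle (\rho(g)\, \Delta f)(x) = f(g^{-1}x) - \frac{1}{|S|} \sum_{s \in S} f(g^{-1}xs) = (\rho(g) f)(x) - \frac{1}{|S|} \sum_{s \in S} (\rho(g)f)(xs) = (\Delta\, \rho(g) f)(x).

In particular, \rho(g) maps each eigenspace of \Delta to itself, and W_2 is \rho-invariant.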

It turns out that if |\rho(G)| is too small, then we can pass to a small quotient group, and every second eigenfunction f pushes down to an eigenfunction on the quotient. This allows us to bound \lambda_2 on the quotient group in terms of \lambda_2 on G. But \lambda_2 on a small, connected graph cannot be too close to zero by the discrete Cheeger inequality. In this way, we arrive at a contradiction if the image of the action is too small. Theorem 2 (finite) is then a simple corollary of Theorem 1 (finite), using a theorem of Jordan on finite linear groups.
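Regarding the Cheeger step above: with \Delta normalized as it is here, one standard form of the discrete Cheeger inequality gives \lambda_2 \geq h^2/2, where h is the edge expansion of the graph, and a connected |S|-regular graph on n vertices has h \geq 2/(|S| n), so

\displaystyle \lambda_2 \;\geq\; \frac{h^2}{2} \;\geq\; \frac{2}{|S|^2 n^2}.

(The precise constants depend on one’s normalization of the Cheeger inequality; only the qualitative lower bound matters for the argument sketched above.)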

Finally, note that the ideal algebraic conclusion of such a study is a statement of the form: there exists a normal subgroup N \leq G of index O(1) such that N is an O(1)-step nilpotent group, where the O(1) notation hides constants that depend only on the growth data of G. It is not clear that such a strong property can hold under only an assumption on c_{G;S} for some fixed generating set S. One might need to make assumptions on every generating set of G, or even geometric assumptions on families of subgroups of G. Defining a simple condition on G and its generators that achieves the full algebraic conclusion is an intriguing open problem.
