Random tree embeddings

Random embeddings

When it is impossible to achieve a low-distortion embedding of some space {X} into another space {Y}, we can consider more lenient kinds of mappings which are still suitable for many applications. For example, consider the unweighted {n}-cycle {C_n}. It is known that any embedding of {C_n} into a tree metric incurs distortion {\Omega(n)}. On the other hand, if we delete a uniformly random edge of {C_n}, this leaves us with a random tree (actually a path) {C'_n} such that for any {x,y \in C_n}, we have

\displaystyle  d_{C'_n}(x,y) \geq d_{C_n}(x,y) \geq \frac12\, \mathop{\mathbb E}[d_{C'_n}(x,y)].

In other words, in expectation the distortion is only 2.
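To see exactly where the factor of 2 comes from, here is a small sketch (my own code, not from the post) that computes the exact expected stretch of the random edge deletion by averaging over all {n} choices of deleted edge:

```python
# Sketch: deleting a uniformly random edge of the n-cycle stretches every
# distance by at most a factor 2 in expectation, while never contracting.

def cycle_dist(x, y, n):
    """Shortest-path distance in the unweighted n-cycle."""
    return min((x - y) % n, (y - x) % n)

def path_dist_after_deletion(x, y, n, e):
    """Distance in the path left after deleting edge {e, e+1 mod n}."""
    rank = lambda v: (v - (e + 1)) % n  # position along the remaining path
    return abs(rank(x) - rank(y))

def max_expected_stretch(n):
    """Max over pairs of E[d_path(x,y)] / d_cycle(x,y), e uniform."""
    worst = 0.0
    for x in range(n):
        for y in range(x + 1, n):
            # non-contractive for every deleted edge
            assert all(path_dist_after_deletion(x, y, n, e) >= cycle_dist(x, y, n)
                       for e in range(n))
            avg = sum(path_dist_after_deletion(x, y, n, e) for e in range(n)) / n
            worst = max(worst, avg / cycle_dist(x, y, n))
    return worst

# For a pair at cycle distance k, the expectation works out to 2k(n-k)/n,
# so the worst ratio is achieved by adjacent vertices: 2 - 2/n < 2.
```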

Let {(X,d)} be a finite metric space, and let {\mathcal F} be a family of finite metric spaces. A stochastic embedding from {X} into {\mathcal F} is a random pair {(F,Y)} where {Y \in \mathcal F} and {F : X \rightarrow Y} is a non-contractive mapping, i.e. such that {d_Y(F(x),F(y)) \geq d_X(x,y)} for all {x,y \in X}. The distortion of {F} is defined by

\displaystyle  \max_{x,y \in X} \frac{\mathop{\mathbb E} \left[d_Y(F(x),F(y))\right]}{d_X(x,y)}\,.

We will now argue that every finite metric space admits a surprisingly good stochastic embedding into tree metrics. The next result is from Fakcharoenphol, Rao, and Talwar, following work by Bartal and Alon, Karp, Peleg, and West.

Theorem 1 Every {n}-point metric space {(X,d)} admits a stochastic embedding into the family of tree metrics, with distortion {O(\log n)}.

We will need the random partitioning theorem we proved last time:

Theorem 2 For every {\Delta > 0}, there is a {\Delta}-bounded random partition {\mathcal P} of {X} which satisfies, for every {x \in X} and {0 < r \leq \Delta/8},

\displaystyle   \mathop{\mathbb P}\left(B(x,r) \nsubseteq \mathcal P(x)\right) \leq \frac{8r}{\Delta} H\left(|B(x,\Delta/8)|,|B(x,\Delta)|\right). \ \ \ \ \ (1)

From partitions to trees. Before proving the theorem, we discuss a general way of constructing a tree metric from a sequence of partitions. Assume (by perhaps scaling the metric first) that {1 < d(x,y) \leq 2^M} for all {x,y \in X} with {x \neq y}. Let {P_0, P_1, \ldots, P_M} be partitions of {X}, where {P_k} is {2^k}-bounded. We will assume that {P_M = \{X\}} and {P_0} is a partition of {X} into singleton sets.



Now we inductively construct a tree metric {T = T(P_0, P_1, \ldots, P_M)} as follows. The nodes of the tree will be of the form {(k,S)} for {k \in \{0,1,\ldots,M\}} and {S \subseteq X}. The root is {(M,X)}. In general, if the tree has a node of the form {(j,S)} for {j > 0}, then {(j,S)} will have children

\displaystyle \{ (j-1, S \cap S') : S' \in P_{j-1} \textrm{ and } S \cap S' \neq \emptyset\}.

The length of an edge of the form {\{(j,S), (j-1,S')\}} is {2^{j}}. This specifies the entire tree {T}. We also specify a map {F : X \rightarrow T} by {F(x) = (0,\{x\})}. We leave the following claim to the reader.

Claim 1 For every {x,y \in X},

\displaystyle  d_{T}(F(x),F(y)) = 2 \sum_{k=1}^{j+1} 2^k = 2^{j+3}-4,

where {j \in \{0,1,\ldots,M\}} is the largest index with {P_j(x) \neq P_j(y)}.

Note, in particular, that {F : X \rightarrow T} is non-contracting because if {2^j < d(x,y) \leq 2^{j+1}} for some {j \geq 0}, then {P_j(x) \neq P_j(y)} since {P_j} is {2^{j}}-bounded, implying that {d_{T}(F(x),F(y)) \geq 2^{j+3}-4 \geq 2^{j+1}}.
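The construction can be made concrete with a short sketch (the list-of-frozensets representation and function names are mine, not from the post): it computes {d_T(F(x),F(y))} by walking both leaf-to-root paths and compares the result against the closed form of Claim 1.

```python
# Sketch of the tree T(P_0, ..., P_M): partitions[k] should be 2^k-bounded,
# partitions[0] all singletons, partitions[-1] = {X}.  The edge between
# levels j and j-1 has length 2^j.

def tree_distance(partitions, x, y):
    """d_T(F(x), F(y)) computed by walking both leaf-to-root paths."""
    M = len(partitions) - 1
    cluster = lambda k, z: next(S for S in partitions[k] if z in S)

    def node(k, z):
        # The tree node of z at level k is the intersection of its
        # clusters at all levels >= k.
        S = cluster(M, z)
        for m in range(M - 1, k - 1, -1):
            S = S & cluster(m, z)
        return (k, S)

    d = 0
    for k in range(1, M + 1):
        if node(k - 1, x) == node(k - 1, y):
            break                 # the two paths have merged
        d += 2 * 2 ** k           # both sides traverse an edge of length 2^k
    return d

def claim1_distance(partitions, x, y):
    """Closed form from Claim 1: 2^{j+3} - 4 for the largest j with
    P_j(x) != P_j(y) (requires x != y)."""
    cluster = lambda k, z: next(S for S in partitions[k] if z in S)
    j = max(k for k in range(len(partitions)) if cluster(k, x) != cluster(k, y))
    return 2 ** (j + 3) - 4
```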

Now we are ready to prove Theorem 1.

Proof: Again, assume that {1 < d(x,y) \leq 2^M} for all {x,y \in X}. For {0 < k < M}, let {\mathcal P_k} be the {2^k}-bounded random partition guaranteed by Theorem 2. Let {\mathcal P_0} be a partition into singletons, and let {\mathcal P_M = \{X\}}. Finally, let {\mathcal T=\mathcal T(\mathcal P_0, \ldots, \mathcal P_M)} be the tree constructed above, and let {F : X \rightarrow \mathcal T} be the corresponding (random) non-contractive mapping.

Now, fix {x,y \in X} and an integer {h} such that {2^h < d(x,y) \leq 2^{h+1}}. Using Claim 1 and (1), we have,

\displaystyle  \mathop{\mathbb E}\left[d_{\mathcal T}(F(x),F(y))\right] \leq \sum_{j=0}^M \mathop{\mathbb P}[\mathcal P_j(x) \neq \mathcal P_j(y)] \cdot 2^{j+3}

    \displaystyle  \leq O(d(x,y)) + \sum_{j=h+4}^M \frac{8\,d(x,y)}{2^j}\, 2^{j+3}\, H\left(|B(x, 2^{j-3})|,|B(x,2^{j})|\right)

   \displaystyle  \leq O(d(x,y)) + O(1) \,d(x,y) \sum_{j=h+4}^M H\left(|B(x, 2^{j-3})|,|B(x,2^{j})|\right)

   \displaystyle  \leq O(H(1,n))\,d(x,y)

   \displaystyle  \leq O(\log n)\,d(x,y).

In the second line, the terms with {j \leq h+3} contribute at most {\sum_{j=0}^{h+3} 2^{j+3} \leq 2^{h+7} = O(d(x,y))}, while for {j \geq h+4} we may apply (1) with {r = d(x,y)}, since {d(x,y) \leq 2^{h+1} \leq 2^{j}/8}. In the penultimate line, we have evaluated three disjoint telescoping sums (one for each residue class of {j} modulo 3), using the fact that for any integers {1 \leq a_1 \leq a_2 \leq \cdots \leq a_k},

\displaystyle H(a_1,a_2)+H(a_2,a_3)+\cdots+H(a_{k-1},a_k)=H(a_1,a_k).
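The telescoping identity is easy to verify numerically; a minimal sketch (function names are mine):

```python
# H(a,b) = sum_{i=a+1}^b 1/i telescopes over any nondecreasing chain.

def H(a, b):
    """Partial harmonic sum H(a, b); note H(a, a) == 0."""
    return sum(1.0 / i for i in range(a + 1, b + 1))

def telescoped(seq):
    """H(a1,a2) + H(a2,a3) + ... + H(a_{k-1},a_k) for nondecreasing seq."""
    return sum(H(a, b) for a, b in zip(seq, seq[1:]))
```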


Since every tree embeds isometrically into {\ell_1}, this offers an alternate proof of Bourgain’s theorem when the target space is {\ell_1}. Moreover, since we know that expander graphs require {\Omega(\log n)} distortion into {\ell_1}, this also shows that Theorem 1 is asymptotically tight.
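The isometric embedding of a tree into {\ell_1} has a standard witness: one coordinate per edge, set to the edge length exactly when the edge lies on the root-to-vertex path. A sketch (the representation and names are mine):

```python
# Sketch: isometric embedding of a tree metric into l_1, one coordinate per
# edge.  A tree is given as a dict mapping child -> (parent, edge_length);
# each non-root vertex identifies its parent edge.

def l1_embedding(tree, root):
    """Return a dict vertex -> l_1 vector with ||f(u)-f(v)||_1 = d_T(u, v)."""
    edges = list(tree)  # one coordinate per non-root vertex's parent edge

    def root_path(v):
        """Set of edges (named by child endpoint) on the path root -> v."""
        path = set()
        while v != root:
            path.add(v)
            v = tree[v][0]
        return path

    emb = {}
    for v in list(tree) + [root]:
        on_path = root_path(v)
        emb[v] = [tree[e][1] if e in on_path else 0.0 for e in edges]
    return emb
```

The {\ell_1} distance between two images is the total length of edges on the symmetric difference of the two root paths, which is exactly the tree distance.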

Random partitions of metric spaces

In addition to the current sequence on Talagrand’s majorizing measures theory, I’m also going to be putting up a series of lecture notes about embeddings of finite metric spaces. The first few will concern random partitions of metric spaces, and their many applications.

Random partitions

Often when one is confronted with the problem of analyzing some problem on a metric space {(X,d)}, a natural way to proceed is by divide and conquer: Break the space into small pieces, do something (perhaps recursively) on each piece, and then glue these local solutions together to achieve a global result. This is certainly an important theme in areas like differential geometry, harmonic analysis, and computational geometry.

In the metric setting, “small” often means pieces of small diameter. Of course we could just break the space into singletons {X = \{x_1\} \cup \{x_2\} \cup \cdots}, but this ignores the second important aspect in a “divide and conquer” approach: what goes on at the boundary. If we are going to combine our local solutions together effectively, we want the “boundary” between distinct pieces of the partition to be small. Formalizing this idea in various ways leads to a number of interesting ideas which I’ll explore in a sequence of posts.

Example: The unit square

Consider the unit square {[0,1]^2} in the plane, equipped with the {\ell_1} metric. If we break the square into pieces of side length {\delta/2}, then every piece has diameter at most {\delta}, and a good fraction of the space is far from the boundary of the partition. In fact, it is easy to see that, e.g., 60% of the measure is at least distance {\delta/20} from the boundary.


But there is a problem with using this as a motivating example. First, observe that we had both a metric structure (the {\ell_1} distance) and a measure (in this case, the uniform measure on {[0,1]^2}). In many potential applications, there is not necessarily a natural measure to work with (e.g. for finite metric spaces, where counting the number of points is often a poor way of measuring the “size” of various pieces).

To escape this apparent conundrum, note that the uniform measure is really just a distraction: The same result holds for any measure on {[0,1]^2}. This is a simple application of the probabilistic method: If we uniformly shift the {\delta/2}-grid at random, then for any point {x \in [0,1]^2}, we have

\displaystyle {\mathbb P}(x \textrm{ is } \delta/20\textrm{-far from the boundary}) \geq 0.6.

Thus, by averaging, for any measure {\mu} on {[0,1]^2} there exists a partition where 60% of the {\mu}-measure is {\delta/20}-far from the boundary. This leads us to the idea that, in many situations, instead of talking about measures, it is better to talk about distributions over partitions. In this case, we want the “boundary” to be small on “average.”
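The shifted-grid argument is easy to simulate. For a margin of {\delta/20}, each coordinate independently avoids its nearest grid lines with probability {4/5}, so the exact probability is {(4/5)^2 = 0.64 \geq 0.6}; the Monte Carlo sketch below (my own code) estimates it for any fixed point:

```python
# Monte Carlo sketch: uniformly shift the (delta/2)-grid and estimate the
# probability that a fixed point x is delta/20-far (in l_1) from the
# boundary of its cell.

import random

def far_from_boundary_prob(x, delta, trials=200_000, rng=None):
    rng = rng or random.Random(0)
    side, margin = delta / 2, delta / 20
    hits = 0
    for _ in range(trials):
        shift = (rng.uniform(0, side), rng.uniform(0, side))
        # The l_1 distance to the cell boundary is the min over coordinates
        # of the distance to the nearest grid line.
        if all(margin <= (xi - s) % side <= side - margin
               for xi, s in zip(x, shift)):
            hits += 1
    return hits / trials
```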

Lipschitz random partitions

We will work primarily with finite metric spaces to avoid having to deal with continuous probability spaces; much of the theory carries over to general metric spaces without significant effort (but the development becomes less clean, as this requires certain measurability assumptions in the theorem statements).

Let {(X,d)} be a finite metric space. If {P} is a partition of {X}, and {x \in X}, we will write {P(x)} for the unique set in {P} containing {x}. We say that {P} is {\Delta}-bounded if {\mathrm{diam}(S) \leq \Delta} for all {S \in P}. We will also say that a random partition {\mathcal P} is {{\Delta}}-bounded if it is supported only on {\Delta}-bounded partitions of {X}.

Let’s now look at one way to formalize the idea that the “boundary” of a random partition {\mathcal P} is small.

A random partition {\mathcal P} of {X} is {{L}}-Lipschitz if, for every {x,y \in X},

\displaystyle \mathbb P\left(\mathcal P(x) \neq \mathcal P(y)\right) \leq L \cdot d(x,y).

Intuitively, the boundary is small if nearby points tend to end up in the same piece of the partition. There is a tradeoff between a random partition {\mathcal P} being {\Delta}-bounded and {{L}}-Lipschitz. As {\Delta} increases, we expect that we can make {{L}}, and hence the “boundary effect,” smaller and smaller. The following theorem can be derived from work of Leighton and Rao, or Linial and Saks. The form stated below comes from work of Bartal.

Theorem 1

If {(X,d)} is an {n}-point metric space and {n \geq 2}, then for every {\Delta > 0}, there is an {\frac{O(\log n)}{\Delta}}-Lipschitz, {\Delta}-bounded random partition {\mathcal P} of {X}.

We will prove a more general fact that will be essential later. For positive integers {a \leq b}, define {H(a,b) = \sum_{i=a+1}^b \frac{1}{i}} (note that {H(a,a)=0}). The next theorem and proof are from Calinescu, Karloff, and Rabani.

Theorem 2

For every {\Delta > 0}, there is a {\Delta}-bounded random partition {\mathcal P} of {X} which satisfies, for every {x \in X} and {0 < r \leq \Delta/8},

\displaystyle {\mathbb P}\left(B(x,r) \nsubseteq \mathcal P(x)\right) \leq \frac{8r}{\Delta} H\left(|B(x,\Delta/8)|,|B(x,\Delta)|\right). \ \ \ \ \ (1)

Observe that Theorem 1 follows from Theorem 2 by setting {r = d(x,y)} and noting that {\mathcal P(x) \neq \mathcal P(y)} implies {B(x,r) \nsubseteq \mathcal P(x)} (for {d(x,y) > \Delta/8}, the desired probability bound is at least 1 and hence trivial). We also use {H(1,n) = O(\log n)} for {n \geq 2}.

Proof: Suppose that {|X|=n}. Let {\alpha \in [\frac14, \frac12]} be chosen uniformly at random, and let {\pi} be a uniformly random bijection {\pi : \{1,2,\ldots,n\} \rightarrow X}. We will think of {\pi} as giving a random ordering of {X}. We define our random partition {\mathcal P} as follows.

For {i \geq 1}, define

\displaystyle C_i = B(\pi(i), \alpha \Delta) \setminus \bigcup_{j=1}^{i-1} C_j.

It is straightforward to check that {\mathcal P = \{C_1, C_2, \ldots, C_n\}} is a partition of {X}, with perhaps many of the sets {C_i} being empty. Furthermore, by construction, {\mathcal P} is {\Delta}-bounded. Thus we are left to verify (1).
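The construction above is short enough to implement directly. A sketch (function names and representation are mine):

```python
# Sketch of the Calinescu-Karloff-Rabani partition: a random radius
# alpha * Delta with alpha uniform in [1/4, 1/2] and a random ordering pi;
# each point joins the first ball (in pi-order) that covers it.

import random

def ckr_partition(points, dist, Delta, rng=None):
    """Return a Delta-bounded random partition as a list of nonempty blocks."""
    rng = rng or random.Random()
    alpha = rng.uniform(0.25, 0.5)
    order = list(points)
    rng.shuffle(order)                    # the random bijection pi
    assigned = set()
    blocks = []
    for center in order:
        # C_i = B(pi(i), alpha * Delta) minus everything already assigned
        block = {y for y in points
                 if y not in assigned and dist(center, y) <= alpha * Delta}
        assigned |= block
        blocks.append(block)
    return [b for b in blocks if b]
```

Each block is contained in a ball of radius {\alpha\Delta \leq \Delta/2}, so its diameter is at most {\Delta}, for every realization of the randomness.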

To this end, fix {x \in X} and {0 < r \leq \Delta/8}, and enumerate the points of {X = \{y_1, y_2, \ldots, y_n\}} so that {d(x,y_1) \leq d(x,y_2) \leq \cdots \leq d(x,y_n)}. Let {B = B(x,r)}. We will say that a point {y_i} sees {B} if {d(y_i, x) \leq \alpha \Delta + r}, and we will say that {y_i} cuts {B} if {\alpha \Delta - r\leq d(y_i, x) \leq \alpha \Delta + r}.
(For instance, it may be that {y_1} and {y_2} see {B} while {y_3} does not, and that only {y_2} cuts {B}.)

Observe that for any {y \in X},

\displaystyle \mathbb P\left(y \textrm{ cuts } B\right) = \mathbb P\left(\alpha \Delta \in [d(x,y)-r,d(x,y)+r]\right) \leq \frac{2r}{\Delta/4} = \frac{8r}{\Delta}.\ \ \ \ \ (2)

Using this language, we can now reveal our analysis strategy: Let {y_{\min}} be the minimal element (according to the ordering {\pi}) which sees {B}. Then {B(x,r) \nsubseteq \mathcal P(x)} can only occur if {y_{\min}} also cuts {B}. The point is that the fate of {B(x,r)} is not decided until some point sees {B}. Then, if {y_{\min}} does not cut {B}, we have {B(x,r) \subseteq C_{\pi^{-1}(y_{\min})}}, hence {B(x,r) \subseteq \mathcal P(x)}.

Thus we can write,

\displaystyle \mathbb P\left(B(x,r) \nsubseteq \mathcal P(x)\right) \leq \mathbb P\left(y_{\min} \textrm{ cuts } B \right) \leq \sum_{i=1}^n \mathbb P\left(y_{\min}=y_i \textrm{ and } y_i \textrm{ cuts } B\right).\ \ \ \ \ (3)

To analyze the latter sum, first note that if {y \notin B(x,\Delta/2+r)}, then {y} can never see {B} since {\alpha \Delta \leq \Delta/2} always. On the other hand, if {y \in B(x,\Delta/4-r)} then {y} always sees {B}, but can never cut {B} since {\alpha \Delta \geq \Delta/4} always.
Recalling that {r \leq \Delta/8} by assumption, we have {B(x,\Delta/8) \subseteq B(x,\Delta/4-r)} and {B(x,\Delta/2+r) \subseteq B(x,\Delta)}. Setting {a=|B(x,\Delta/8)|} and {b=|B(x,\Delta)|}, we can use (3) to write

\displaystyle \mathbb P\left(B(x,r) \nsubseteq \mathcal P(x)\right)\leq \sum_{i=a+1}^b \mathbb P\left(y_{\min}=y_i \textrm{ and } y_i \textrm{ cuts }B\right). \ \ \ \ \ (4)

Now we come to the heart of the analysis: For any {i \geq 1},

\displaystyle \mathbb P\left(y_{\min} = y_i\,|\, y_i \textrm{ cuts } B\right) \leq \frac{1}{i}.\ \ \ \ \ (5)

The idea is that if {y_i} cuts {B}, then {\alpha \Delta \geq d(x,y_i)-r}. Thus if any {y_j} for {j < i} comes before {y_i} in the {\pi}-ordering, then {y_j} also sees {B} since {d(x,y_j) \leq d(x,y_i)}, hence {y_i \neq y_{\min}}. But the probability that {y_i} is the {\pi}-minimal element of {\{y_1, \ldots, y_i\}} is precisely {\frac{1}{i}}, proving (5).
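The only fact about {\pi} used here is that {y_i} is {\pi}-minimal among {\{y_1,\ldots,y_i\}} with probability exactly {\frac{1}{i}}, independently of {\alpha}. A brute-force check over all orderings (my own code):

```python
# Sketch: in a uniformly random ordering of n items, item i-1 comes first
# among the first i items with probability exactly 1/i, by symmetry.

from itertools import permutations

def prob_minimal(i, n):
    """Exact probability (over all orderings of n items, indexed 0..n-1)
    that item i-1 appears earliest among items 0..i-1."""
    perms = list(permutations(range(n)))
    count = 0
    for p in perms:
        positions = {v: pos for pos, v in enumerate(p)}
        if min(range(i), key=positions.__getitem__) == i - 1:
            count += 1
    return count / len(perms)
```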

To finish the proof, we combine (2), (4), and (5), yielding

\displaystyle \mathbb P\left(B(x,r) \nsubseteq \mathcal P(x)\right) \leq \sum_{i=a+1}^b \mathbb P\left(y_i \textrm{ cuts } B\right) \cdot \mathbb P\left(y_i = y_{\min}\,|\, y_i \textrm{ cuts } B\right) \leq  \frac{8r}{\Delta} \sum_{i=a+1}^b \frac{1}{i}\,.


Tightness for expanders. Before ending this section, let us mention that Theorem 1 is optimal up to a constant factor. Let {\{G_k\}} be a family of degree-3 expander graphs, equipped with their shortest-path metric {d_k}. Assume that {|E(S, \bar S)| \geq c|S|} for some {c > 0} and all {|S| \leq \frac12 |G_k|}. Let {\Delta_k = \frac{\log |G_k|}{10}}, and suppose that {G_k} admits a {\Delta_k}-bounded {{L}}-Lipschitz random partition. By averaging, this means there is a fixed {\Delta_k}-bounded partition {P} of {G_k} which cuts at most an {{L}}-fraction of edges. But every {S \in P} has {\mathrm{diam}(S) \leq \Delta_k}, and a ball of radius {\Delta_k} in a degree-3 graph contains at most {3 \cdot 2^{\Delta_k} \leq 3\,|G_k|^{1/10}} points, which implies {|S| < \frac12 |G_k|}. Hence the partition must cut an {\Omega(1)} fraction of edges, implying {L = \Omega(1) = \Omega\left(\frac{\log |G_k|}{\Delta_k}\right)}.

In the next post, we’ll see how these random partitions allow us to reduce many questions on finite metric spaces to questions on trees.