In addition to the current sequence on Talagrand’s majorizing measures theory, I’m also going to be putting up a series of lecture notes about embeddings of finite metric spaces. The first few will concern random partitions of metric spaces, and their many applications.
Random partitions
Often when one is confronted with a problem on a metric space $(X,d)$, a natural way to proceed is by divide and conquer: Break the space into small pieces, do something (perhaps recursively) on each piece, and then glue these local solutions together to achieve a global result. This is certainly an important theme in areas like differential geometry, harmonic analysis, and computational geometry.
In the metric setting, "small" often means pieces of small diameter. Of course we could just break the space into singletons $X = \{x_1\} \cup \{x_2\} \cup \cdots \cup \{x_n\}$, but this ignores the second important aspect of a "divide and conquer" approach: what goes on at the boundary. If we are going to combine our local solutions together effectively, we want the "boundary" between distinct pieces of the partition to be small. Formalizing this idea in various ways leads to a number of interesting concepts which I'll explore in a sequence of posts.
Example: The unit square
Consider the unit square $[0,1]^2$ in the plane, equipped with the $\ell_1$ metric. If we break the square into sub-squares of side length $\delta/2$, then every piece has diameter at most $\delta$, and a good fraction of the space is far from the boundary of the partition. In fact, it is easy to see that e.g. 60% of the measure is at least distance $\delta/20$ from the boundary (the red dotted line): trimming a $\delta/20$-margin from each cell leaves an inner square containing a $(4/5)^2 = 64\%$ fraction of the cell's area.
But there is a problem with using this as a motivating example. First, observe that we had both a metric structure (the $\ell_1$ distance) and a measure (in this case, the uniform measure on $[0,1]^2$). In many potential applications, there is not necessarily a natural measure to work with (e.g. for finite metric spaces, where counting the number of points is often a poor way of measuring the "size" of various pieces).
To escape this apparent conundrum, note that the uniform measure is really just a distraction: The same result holds for any measure on $[0,1]^2$. This is a simple application of the probabilistic method: If we uniformly shift the $(\delta/2)$-grid at random, then for any point $x \in [0,1]^2$, we have

$$\mathbb{P}\left[x \text{ is } \tfrac{\delta}{20}\text{-far from the boundary}\right] \geq 0.6.$$

Thus, by averaging, for any measure $\mu$ on $[0,1]^2$ there exists a partition where 60% of the $\mu$-measure is $\delta/20$-far from the boundary. This leads us to the idea that, in many situations, instead of talking about measures, it is better to talk about distributions over partitions. In this case, we want the "boundary" to be small on "average."
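To make the averaging step concrete, here is a small Monte Carlo sketch in Python (the helper name is mine, not from the post) that estimates the probability that a fixed point lands $\delta/20$-far from the boundary of a uniformly shifted $(\delta/2)$-grid. By translation invariance the answer is the same for every point, and it should come out near $(4/5)^2 = 0.64$.

```python
import random

def prob_far_from_shifted_grid(x, y, delta, margin, trials=200_000):
    """Estimate P[(x, y) is >= margin away from the boundary of a
    uniformly shifted grid whose cells have side delta/2]."""
    side = delta / 2
    hits = 0
    for _ in range(trials):
        # Shift the grid by a uniform offset in [0, side)^2.
        sx, sy = random.uniform(0, side), random.uniform(0, side)
        # Per-axis distance from (x, y) to the nearest grid line.
        dx = min((x - sx) % side, side - (x - sx) % side)
        dy = min((y - sy) % side, side - (y - sy) % side)
        # Distance to the axis-aligned boundary is min(dx, dy).
        if min(dx, dy) >= margin:
            hits += 1
    return hits / trials

print(prob_far_from_shifted_grid(0.3, 0.7, delta=1.0, margin=1.0 / 20))
# Prints roughly 0.64 >= 0.6, independent of the chosen point.
```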
Lipschitz random partitions
We will work primarily with finite metric spaces to avoid having to deal with continuous probability spaces; much of the theory carries over to general metric spaces without significant effort (but the development becomes less clean, as this requires certain measurability assumptions in the theorem statements).
Let $(X,d)$ be a finite metric space. If $P$ is a partition of $X$, and $x \in X$, we will write $P(x)$ for the unique set in $P$ containing $x$. We say that $P$ is $\Delta$-bounded if $\operatorname{diam}(S) \leq \Delta$ for all $S \in P$. We will also say that a random partition $\mathcal{P}$ is $\Delta$-bounded if it is supported only on $\Delta$-bounded partitions of $X$.
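In code, one might represent a partition as a list of disjoint sets; the following minimal sketch (function names are mine) spells out $P(x)$ and the $\Delta$-boundedness check.

```python
from itertools import combinations

def diam(S, d):
    """Diameter of a finite set S under the metric d."""
    return max((d(u, v) for u, v in combinations(S, 2)), default=0.0)

def is_delta_bounded(partition, d, Delta):
    """True if every piece of the partition has diameter at most Delta."""
    return all(diam(S, d) <= Delta for S in partition)

def cell_of(partition, x):
    """P(x): the unique piece of the partition containing x."""
    return next(S for S in partition if x in S)
```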
Let’s now look at one way to formalize the idea that the “boundary” of a random partition is small.
A $\Delta$-bounded random partition $\mathcal{P}$ of $X$ is $\alpha$-Lipschitz if, for every $x, y \in X$,

$$\mathbb{P}\left[\mathcal{P}(x) \neq \mathcal{P}(y)\right] \leq \alpha \cdot \frac{d(x,y)}{\Delta}.$$
Intuitively, the boundary is small if nearby points tend to end up in the same piece of the partition. There is a tradeoff between a random partition being $\Delta$-bounded and $\alpha$-Lipschitz. As $\Delta$ increases, we expect that we can make $\alpha$, and hence the "boundary effect," smaller and smaller. The following theorem can be derived from work of Leighton and Rao, or Linial and Saks. The form stated below comes from work of Bartal.
Theorem 1
If $(X,d)$ is an $n$-point metric space and $n \geq 2$, then for every $\Delta > 0$, there is an $O(\log n)$-Lipschitz, $\Delta$-bounded random partition $\mathcal{P}$ of $X$.
We will prove a more general fact that will be essential later. For positive integers $a \leq b$, define $H(a,b) = \sum_{i=a+1}^{b} \frac{1}{i}$ (note that $H(a,b) \leq \ln(b/a)$). The next theorem and proof are from Calinescu, Karloff, and Rabani.
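Since we will experiment with this bound below, here is the same quantity in Python, a direct transcription of the definition:

```python
def H(a, b):
    """H(a, b) = sum_{i=a+1}^{b} 1/i, which is at most ln(b/a)."""
    return sum(1.0 / i for i in range(a + 1, b + 1))
```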
Theorem 2
For every $\Delta > 0$, there is a $\Delta$-bounded random partition $\mathcal{P}$ of $X$ which satisfies, for every $x \in X$ and $0 < r \leq \Delta/8$,

$$\mathbb{P}\left[B(x,r) \not\subseteq \mathcal{P}(x)\right] \leq \frac{8r}{\Delta}\, H\big(|B(x, \Delta/8)|,\, |B(x, \Delta)|\big). \qquad (1)$$
Observe that Theorem 1 follows from Theorem 2 by setting $r = d(x,y)$, and noting that $\mathcal{P}(x) \neq \mathcal{P}(y)$ implies $B(x,r) \not\subseteq \mathcal{P}(x)$. (For $d(x,y) > \Delta/8$ the conclusion of Theorem 1 is trivial, since then $8\, d(x,y)/\Delta > 1$; otherwise $H(|B(x,\Delta/8)|, |B(x,\Delta)|) \leq H(1,n) = O(\log n)$.) We also use $B(x,r)$ for the closed ball $\{y \in X : d(x,y) \leq r\}$.
Proof:
Suppose that $|X| = n$. Let $\alpha \in [\frac14, \frac12]$ be chosen uniformly at random, and let $\pi$ be a uniformly random bijection $\pi : \{1, 2, \ldots, n\} \to X$. We will think of $\pi$ as giving a random ordering of $X$. We define our random partition $\mathcal{P}$ as follows.

For $1 \leq i \leq n$, define

$$C_i = B\big(\pi(i), \alpha\Delta\big) \setminus \bigcup_{j=1}^{i-1} C_j.$$

It is straightforward that $\mathcal{P} = \{C_1, C_2, \ldots, C_n\}$ is a partition of $X$, with perhaps many of the sets $C_i$ being empty. Furthermore, by construction, $\mathcal{P}$ is $\Delta$-bounded, since each $C_i$ lies in a ball of radius $\alpha\Delta \leq \Delta/2$. Thus we are left to verify (1).
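The construction can be implemented essentially verbatim. Here is a sketch in Python (the function name is mine): sample $\alpha$, shuffle the points to realize $\pi$, and let each point in turn claim the not-yet-assigned part of its $\alpha\Delta$-ball.

```python
import random

def ckr_partition(points, d, Delta):
    """Sample one partition from the CKR distribution over a finite
    metric space (points, d), with diameter bound Delta."""
    alpha = random.uniform(0.25, 0.5)      # alpha uniform in [1/4, 1/2]
    order = list(points)
    random.shuffle(order)                  # the random ordering pi
    unassigned = set(points)
    partition = []
    for w in order:
        # C_i: the unassigned part of the ball B(w, alpha * Delta).
        C = {y for y in unassigned if d(w, y) <= alpha * Delta}
        unassigned -= C
        if C:                              # drop the empty pieces
            partition.append(C)
    return partition
```

Each piece lies in a ball of radius $\alpha\Delta \leq \Delta/2$, so every sample is $\Delta$-bounded by construction.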
To this end, fix $x \in X$ and $0 < r \leq \Delta/8$, and enumerate the points of $X = \{y_1, y_2, \ldots, y_n\}$ so that $d(x, y_1) \leq d(x, y_2) \leq \cdots \leq d(x, y_n)$. Let $B = B(x, r)$. We will say that a point $y_i$ sees $B$ if $d(y_i, x) \leq \alpha\Delta + r$, and we will say that $y_i$ cuts $B$ if $\alpha\Delta - r \leq d(y_i, x) \leq \alpha\Delta + r$.

(In the picture below, $y_1$ and $y_2$ see $B$, while $y_3$ does not. Only $y_2$ cuts $B$.)
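In the notation of the earlier sketches, these two definitions are one-line predicates (names again mine); in the planar picture, seeing means the ball $B(y, \alpha\Delta)$ reaches $B$, and cutting means its boundary passes through $B$.

```python
def sees(y, x, r, alpha, Delta, d):
    """y sees B(x, r): d(y, x) <= alpha * Delta + r."""
    return d(y, x) <= alpha * Delta + r

def cuts(y, x, r, alpha, Delta, d):
    """y cuts B(x, r): the ball B(y, alpha * Delta) can contain part,
    but not necessarily all, of B(x, r)."""
    return alpha * Delta - r <= d(y, x) <= alpha * Delta + r
```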
Using this language, we can now reveal our analysis strategy: Let $y_{\min}$ be the minimal element (according to the ordering $\pi$) which sees $B$. Then $B \not\subseteq \mathcal{P}(x)$ can only occur if $y_{\min}$ also cuts $B$. The point is that the fate of $B$ is not decided until some point sees $B$: any point earlier in the ordering has $d(\cdot, x) > \alpha\Delta + r$, so its cluster is disjoint from $B$. Then, if $y_{\min}$ does not cut $B$, we have $B \subseteq B(y_{\min}, \alpha\Delta)$, hence $B \subseteq \mathcal{P}(x)$. We conclude that

$$\mathbb{P}\left[B \not\subseteq \mathcal{P}(x)\right] \leq \mathbb{P}\left[y_{\min} \text{ cuts } B\right] \qquad (2)$$

$$\leq \sum_{i=1}^{n} \mathbb{P}\left[y_i \text{ cuts } B \text{ and } y_i \text{ sees } B \text{ first}\right]. \qquad (3)$$
To analyze the latter sum, first note that if $y \notin B(x, \Delta/2 + r)$, then $y$ can never see $B$ since $\alpha\Delta \leq \Delta/2$ always. On the other hand, if $y \in B(x, \Delta/4 - r)$, then $y$ always sees $B$, but can never cut $B$ since $\alpha\Delta \geq \Delta/4$ always.
Recalling that $r \leq \Delta/8$ by assumption, and setting $a = |B(x, \Delta/4 - r)|$ and $b = |B(x, \Delta/2 + r)|$, we can use (3) to write

$$\mathbb{P}\left[B \not\subseteq \mathcal{P}(x)\right] \leq \sum_{i=a+1}^{b} \mathbb{P}\left[y_i \text{ cuts } B \text{ and } y_i \text{ sees } B \text{ first}\right]. \qquad (4)$$
Now we come to the heart of the analysis: For any $i \geq 1$,

$$\mathbb{P}\left[y_i \text{ sees } B \text{ first} \mid y_i \text{ cuts } B\right] \leq \frac{1}{i}. \qquad (5)$$

The idea is that if $y_i$ cuts $B$, then $d(y_i, x) \leq \alpha\Delta + r$. Thus if any $y_j$ for $j < i$ comes before $y_i$ in the $\pi$-ordering, then $y_j$ also sees $B$ since $d(y_j, x) \leq d(y_i, x) \leq \alpha\Delta + r$, hence $y_i$ does not see $B$ first. But the probability that $y_i$ is the $\pi$-minimal element of $\{y_1, \ldots, y_i\}$ is precisely $\frac{1}{i}$ (here we use that $\pi$ and $\alpha$ are independent, so conditioning on the cutting event, which depends only on $\alpha$, leaves $\pi$ uniform), proving (5).
To finish the proof, we combine (2), (4), and (5), yielding

$$\mathbb{P}\left[B \not\subseteq \mathcal{P}(x)\right] \leq \sum_{i=a+1}^{b} \frac{1}{i}\, \mathbb{P}\left[y_i \text{ cuts } B\right] \leq \frac{8r}{\Delta} \sum_{i=a+1}^{b} \frac{1}{i} = \frac{8r}{\Delta}\, H(a, b) \leq \frac{8r}{\Delta}\, H\big(|B(x, \Delta/8)|,\, |B(x, \Delta)|\big),$$

where the second inequality holds because $y_i$ cuts $B$ only if $\alpha\Delta$ falls in an interval of length $2r$, while $\alpha\Delta$ is uniform on an interval of length $\Delta/4$; the final inequality uses $r \leq \Delta/8$, so that $a \geq |B(x, \Delta/8)|$ and $b \leq |B(x, \Delta)|$. $\Box$
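One can test the guarantee numerically. The sketch below reuses ckr_partition and H from the earlier snippets and estimates $\mathbb{P}[\mathcal{P}(x) \neq \mathcal{P}(y)]$ for two points of a path metric, comparing it with the Theorem 1-style bound $\frac{8\, d(x,y)}{\Delta} H(1,n)$; the example names and parameters are mine.

```python
def separation_prob(points, d, Delta, x, y, trials=20_000):
    """Estimate P[P(x) != P(y)] under the CKR distribution."""
    count = 0
    for _ in range(trials):
        P = ckr_partition(points, d, Delta)
        Px = next(S for S in P if x in S)
        count += y not in Px
    return count / trials

n, Delta = 32, 8.0
points = list(range(n))                 # n points on a line
d = lambda u, v: float(abs(u - v))      # path metric
x, y = 10, 11
print(separation_prob(points, d, Delta, x, y),
      "<=", 8 * d(x, y) / Delta * H(1, n))
```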
Tightness for expanders. Before ending this section, let us mention that Theorem 1 is optimal up to a constant factor. Let $\{G_n\}$ be a family of degree-3 expander graphs, equipped with their shortest-path metric $d$. Assume that $|E(S, \bar{S})| \geq c|S|$ for some constant $c > 0$ and all subsets $S \subseteq V(G_n)$ with $|S| \leq n/2$. Let $\Delta = \frac{1}{2} \log_2 n$, and suppose that $G_n$ admits a $\Delta$-bounded, $\alpha$-Lipschitz random partition. Since every edge has length 1, by averaging this means there is a fixed $\Delta$-bounded partition $P$ of $G_n$ which cuts at most an $\frac{\alpha}{\Delta}$-fraction of edges. But every $S \in P$ has $\operatorname{diam}(S) \leq \Delta$, which implies $|S| \leq 3 \cdot 2^{\Delta} = 3\sqrt{n} \leq n/2$ for $n$ large, because the graph has degree 3. Hence the partition must cut an $\Omega(1)$ fraction of edges (each piece satisfies $|E(S, \bar{S})| \geq c|S|$, and summing over pieces counts every cut edge twice, so at least $cn/2$ of the $3n/2$ edges are cut), implying $\alpha \geq \Omega(\Delta) = \Omega(\log n)$.
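For completeness, the averaging quantity in this argument is simple to compute on a concrete example; the sketch below (names are mine) returns the fraction of edges cut by a fixed partition, which an $\alpha$-Lipschitz, $\Delta$-bounded random partition keeps at most $\alpha/\Delta$ in expectation when every edge has length 1.

```python
def cut_fraction(edges, cell_id):
    """Fraction of edges whose endpoints lie in different pieces;
    cell_id maps each vertex to an identifier of its piece."""
    cut = sum(1 for u, v in edges if cell_id[u] != cell_id[v])
    return cut / len(edges)
```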
In the next post, we'll see how these random partitions allow us to reduce many questions on finite metric spaces to questions on trees.