# Föllmer’s drift, Itô’s lemma, and the log-Sobolev inequality

1. Construction of Föllmer’s drift

In a previous post, we saw how an entropy-optimal drift process could be used to prove the Brascamp-Lieb inequalities. Our main tool was a result of Föllmer that we now recall and justify. Afterward, we will use it to prove the Gaussian log-Sobolev inequality.

Consider ${f : \mathbb R^n \rightarrow \mathbb R_+}$ with ${\int f \,d\gamma_n = 1}$, where ${\gamma_n}$ is the standard Gaussian measure on ${\mathbb R^n}$. Let ${\{B_t\}}$ denote an ${n}$-dimensional Brownian motion with ${B_0=0}$. We consider all processes of the form

$\displaystyle W_t = B_t + \int_0^t v_s\,ds\,, \ \ \ \ \ (1)$

where ${\{v_s\}}$ is a progressively measurable drift such that ${W_1}$ has law ${f\,d\gamma_n}$.

Theorem 1 (Föllmer) It holds that

$\displaystyle D(f d\gamma_n \,\|\, d\gamma_n) = \min D(W_{[0,1]} \,\|\, B_{[0,1]}) = \min \frac12 \int_0^1 \mathop{\mathbb E}\,\|v_t\|^2\,dt\,,$

where the minima are over all processes of the form (1).

Proof: In the preceding post (Lemma 2), we have already seen that for any drift of the form (1), it holds that

$\displaystyle D(f d\gamma_n \,\|\,d\gamma_n) \leq \frac12 \int_0^1 \mathop{\mathbb E}\,\|v_t\|^2\,dt = D(W_{[0,1]} \,\|\, B_{[0,1]})\,,$

thus we need only exhibit a drift ${\{v_t\}}$ achieving equality.

We define

$\displaystyle v_t = \nabla \log P_{1-t} f(W_t) = \frac{\nabla P_{1-t} f(W_t)}{P_{1-t} f(W_t)}\,,$

where ${\{P_t\}}$ is the Brownian semigroup defined by

$\displaystyle P_t f(x) = \mathop{\mathbb E}[f(x + B_t)]\,.$

As we saw in the previous post (Lemma 2), the chain rule yields

$\displaystyle D(W_{[0,1]} \,\|\, B_{[0,1]}) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|v_t\|^2\,dt\,. \ \ \ \ \ (2)$

We are left to show that ${W_1}$ has law ${f \,d\gamma_n}$ and ${D(W_{[0,1]} \,\|\, B_{[0,1]}) = D(f d\gamma_n \,\|\,d\gamma_n)}$.

We will prove the first fact using Girsanov’s theorem to argue about the change of measure between ${\{W_t\}}$ and ${\{B_t\}}$. As in the previous post, we will argue somewhat informally using the heuristic that the law of ${dB_t}$ is a Gaussian random variable in ${\mathbb R^n}$ with covariance ${dt \cdot I}$. Itô’s formula confirms that this heuristic is justified (see our use of the formula below).

The following lemma says that, given any sample path ${\{W_s : s \in [0,t]\}}$ of our process up to time ${t}$, the probability that Brownian motion (without drift) would have “done the same thing” is ${\frac{1}{M_t}}$.

Remark 1 I chose to present various steps in the next proof at varying levels of formality. The arguments have the same structure as corresponding formal proofs, but I thought (perhaps naïvely) that this would be instructive.

Lemma 2 Let ${\mu_t}$ denote the law of ${\{W_s : s \in [0,t]\}}$. If we define

$\displaystyle M_t = \exp\left(-\int_0^t \langle v_s,dB_s\rangle - \frac12 \int_0^t \|v_s\|^2\,ds\right)\,,$

then under the measure ${\nu_t}$ given by

$\displaystyle d\nu_t = M_t \,d\mu_t\,,$

the process ${\{W_s : s \in [0,t]\}}$ has the same law as ${\{B_s : s \in [0,t]\}}$.

Proof: We argue by analogy with the discrete proof. First, let us define the infinitesimal “transition kernel” of Brownian motion using our heuristic that ${dB_t}$ has covariance ${dt \cdot I}$:

$\displaystyle p(x,y) = \frac{e^{-\|x-y\|^2/2dt}}{(2\pi dt)^{n/2}}\,.$

We can also compute the (time-inhomogeneous) transition kernel ${q_t}$ of ${\{W_t\}}$:

$\displaystyle q_t(x,y) = \frac{e^{-\|v_t dt + x - y\|^2/2dt}}{(2\pi dt)^{n/2}} = p(x,y) e^{-\frac12 \|v_t\|^2 dt} e^{-\langle v_t, x-y\rangle}\,.$

Here we are using that ${dW_t = dB_t + v_t\,dt}$ and ${v_t}$ is deterministic conditioned on the past, thus the law of ${dW_t}$ is a normal with mean ${v_t\,dt}$ and covariance ${dt \cdot I}$.

To avoid confusion of derivatives, let’s use ${\alpha_t}$ for the density of ${\mu_t}$ and ${\beta_t}$ for the density of Brownian motion (recall that these are densities on paths). Now let us relate the density ${\alpha_{t+dt}}$ to the density ${\alpha_{t}}$. We use here the notations ${\{\hat W_t, \hat v_t, \hat B_t\}}$ to denote a (non-random) sample path of ${\{W_t\}}$:

$\displaystyle \begin{array}{lll} \alpha_{t+dt}(\hat W_{[0,t+dt]}) &= \alpha_t(\hat W_{[0,t]}) q_t(\hat W_t, \hat W_{t+dt}) \\ &= \alpha_t(\hat W_{[0,t]}) p(\hat W_t, \hat W_{t+dt}) e^{-\frac12 \|\hat v_t\|^2\,dt-\langle \hat v_t,\hat W_t-\hat W_{t+dt}\rangle} \\ &= \alpha_t(\hat W_{[0,t]}) p(\hat W_t, \hat W_{t+dt}) e^{-\frac12 \|\hat v_t\|^2\,dt+\langle \hat v_t,d \hat W_t\rangle} \\ &= \alpha_t(\hat W_{[0,t]}) p(\hat W_t, \hat W_{t+dt}) e^{\frac12 \|\hat v_t\|^2\,dt+\langle \hat v_t, d \hat B_t\rangle}\,, \end{array}$

where the last line uses ${d\hat W_t = d\hat B_t + \hat v_t\,dt}$.

Now by “heuristic” induction, we can assume ${\alpha_t(\hat W_{[0,t]})=\frac{1}{M_t} \beta_t(\hat W_{[0,t]})}$, yielding

$\displaystyle \begin{array}{lll} \alpha_{t+dt}(\hat W_{[0,t+dt]}) &= \frac{1}{M_t} \beta_t(\hat W_{[0,t]}) p(\hat W_t, \hat W_{t+dt}) e^{\frac12 \|\hat v_t\|^2\,dt+\langle \hat v_t, d \hat B_t\rangle} \\ &= \frac{1}{M_{t+dt}} \beta_t(\hat W_{[0,t]}) p(\hat W_t, \hat W_{t+dt}) \\ &= \frac{1}{M_{t+dt}} \beta_{t+dt}(\hat W_{[0,t+dt]})\,. \end{array}$

In the last line, we used the fact that ${p}$ is the infinitesimal transition kernel for Brownian motion. $\Box$

Now we will show that

$\displaystyle P_{1-t} f(W_t) = \exp\left(\frac12 \int_0^t \|v_s\|^2\,ds + \int_0^t \langle v_s, dB_s\rangle\right) = \frac{1}{M_t}\,. \ \ \ \ \ (3)$

From Lemma 2, it will follow that ${W_t}$ has the law ${(P_{1-t} f)\cdot d\nu_t}$ where ${d\nu_t}$ is the law of ${B_t}$. In particular, ${W_1}$ has the law ${f\,d\nu_1 = f\,d\gamma_n}$ which was our first goal.

Given our preceding less formal arguments, let us use a proper stochastic calculus argument to establish (3). To do that we need a way to calculate

$\displaystyle d \log P_{1-t} f(W_t) \;\textrm{``}{=}\textrm{''}\; \log P_{1-t-dt} f(W_{t+dt}) - \log P_{1-t} f(W_t)\,. \ \ \ \ \ (4)$

Notice that this involves both time and space derivatives.

Itô’s lemma. Suppose we have a function ${F : \mathbb R \times [0,1] \rightarrow \mathbb R}$ that we write as ${F(x,t)}$, where ${x}$ is a space variable and ${t}$ is a time variable, and which is twice continuously differentiable in ${x}$ and once in ${t}$. We can expand ${d F}$ via its Taylor series:

$\displaystyle d F = \partial_t F \,dt + \partial_x F\,dx + \frac12 \partial_x^2 F\,dx^2 + \frac12 \partial_x \partial_t F\,dx\,dt + \cdots\,.$

Normally we could eliminate the terms ${dx^2, dx\, dt}$, etc. since they are of higher order (hence negligible) as ${dx,dt \rightarrow 0}$. But recall that for Brownian motion we have the heuristic ${\mathop{\mathbb E}[dB_t^2]=dt}$. Thus we cannot eliminate the second-order space derivative if we plan to plug in ${x=B_t}$ (or ${x=W_t}$, a process driven by Brownian motion). Itô’s lemma says that this consideration alone gives us the correct result:

$\displaystyle d F(W_t,t) = \partial_t F(W_t,t)\,dt + \partial_x F(W_t,t)\,dW_t + \frac12 \partial_x^2 F(W_t,t)\,dt\,.$

This generalizes in a straightforward way to the higher dimensional setting ${F : \mathbb R^n \times [0,1] \rightarrow \mathbb R}$.
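
Before using it, here is a quick numerical sanity check of the heuristic (a throwaway simulation of my own, not part of the formal development): the realized quadratic variation ${\sum (dB_t)^2}$ concentrates around ${t}$, and for ${F(x)=x^2}$ the Itô correction is exactly what makes ${B_t^2 - t}$ a martingale.

```python
import numpy as np

# Sanity check of the heuristic E[dB_t^2] = dt behind Ito's lemma:
# the realized quadratic variation (sum of dB^2) concentrates around t = 1,
# and for F(x) = x^2 the Ito correction makes B_t^2 - t a martingale,
# so E[B_1^2] - 1 should vanish.
rng = np.random.default_rng(0)
paths, steps = 100_000, 1000
dt = 1.0 / steps

B = np.zeros(paths)
qv = np.zeros(paths)
for _ in range(steps):
    dB = np.sqrt(dt) * rng.standard_normal(paths)
    B += dB
    qv += dB**2

print("quadratic variation: mean", qv.mean(), "std", qv.std())  # ~1, ~0.045
print("E[B_1^2] - 1 =", (B**2).mean() - 1.0)                    # ~0
```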

With Itô’s lemma in hand, let us continue to calculate the derivative

$\displaystyle \begin{array}{lll} d P_{1-t} f(W_t) &= - \Delta P_{1-t} f(W_t)\,dt + \langle \nabla P_{1-t} f(W_t), dW_t\rangle + \Delta P_{1-t} f(W_t) \,dt \\ &= \langle \nabla P_{1-t} f(W_t), dW_t\rangle \\ &= P_{1-t} f(W_t) \,\langle v_t, dW_t\rangle\,. \end{array}$

For the time derivative (the first term), we have employed the heat equation

$\displaystyle \partial_t P_{1-t} f = - \Delta P_{1-t} f\,,$

where ${\Delta = \frac12 \sum_{i=1}^n \partial_{x_i}^2}$ is the Laplacian on ${\mathbb R^n}$.

Note that the heat equation was already contained in our “infinitesimal density” ${p}$ in the proof of Lemma 2, or in the representation ${P_t = e^{t \Delta}}$, and Itô’s lemma was also contained in our heuristic that ${dB_t}$ has covariance ${dt \cdot I}$.

Using Itô’s formula again yields

$\displaystyle d \log P_{1-t} f(W_t) = \langle v_t, dW_t\rangle - \frac12 \|v_t\|^2\,dt = \frac12 \|v_t\|^2\,dt + \langle v_t,dB_t\rangle\,,$

and integrating from ${0}$ to ${t}$ (noting that ${\log P_1 f(W_0) = \log \int f\,d\gamma_n = 0}$) gives our desired conclusion (3).

Our final task is to establish optimality: ${D\left(W_{[0,1]} \,\|\, B_{[0,1]}\right) = D(W_1\,\|\,B_1)}$. We apply the formula (3):

$\displaystyle D(W_1\,\|\,B_1) = \mathop{\mathbb E}[\log f(W_1)] = \mathop{\mathbb E}\left[\frac12 \int_0^1 \|v_t\|^2\,dt\right],$

where we used ${\mathop{\mathbb E}[\langle v_t,dB_t\rangle]=0}$. Combined with (2), this completes the proof of the theorem. $\Box$
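
Before moving on, let us see Theorem 1 in simulation. The sketch below is my own illustration (the 1-D example density is an arbitrary choice, not from Föllmer or Lehec): for ${f(x) = e^{-a^2/2}\cosh(ax)}$, the measure ${f\,d\gamma_1}$ is the mixture ${\tfrac12 N(a,1) + \tfrac12 N(-a,1)}$, the semigroup is explicit, ${P_{1-t} f(x) = e^{-a^2 t/2} \cosh(ax)}$, and so the drift is ${v_t = a \tanh(a W_t)}$. Running Euler-Maruyama, the law of ${W_1}$ matches the target and the accumulated drift energy matches the relative entropy.

```python
import numpy as np

# Simulating the Follmer drift in 1-D for the target f*dgamma_1 equal to the
# mixture (1/2) N(a,1) + (1/2) N(-a,1), i.e. f(x) = exp(-a^2/2) cosh(a x).
# The semigroup is explicit, P_{1-t} f(x) = exp(-a^2 t/2) cosh(a x), so the
# drift is v_t = d/dx log P_{1-t} f(W_t) = a tanh(a W_t).
rng = np.random.default_rng(1)
a, paths, steps = 1.5, 100_000, 1000
dt = 1.0 / steps

W = np.zeros(paths)
energy = 0.0                                  # ~ (1/2) int_0^1 E v_t^2 dt
for _ in range(steps):
    v = a * np.tanh(a * W)
    energy += 0.5 * np.mean(v**2) * dt
    W += v * dt + np.sqrt(dt) * rng.standard_normal(paths)

# Compare against quadrature for the target density f * gauss.
x = np.linspace(-12, 12, 200_001); dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f = np.exp(-a**2 / 2) * np.cosh(a * x)
print("E|W_1| (simulated)   :", np.mean(np.abs(W)))
print("E|X|   (quadrature)  :", np.sum(np.abs(x) * f * gauss) * dx)
print("drift energy         :", energy)
print("D(f dgamma || dgamma):", np.sum(f * np.log(f) * gauss) * dx)
```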

2. The Gaussian log-Sobolev inequality

Consider again a measurable ${f : \mathbb R^n \rightarrow \mathbb R_+}$ with ${\int f\,d\gamma_n=1}$. Let us define ${\mathrm{Ent}_{\gamma_n}(f) = D(f\,d\gamma_n \,\|\,d\gamma_n)}$. Then the classical log-Sobolev inequality in Gaussian space asserts that

$\displaystyle \mathrm{Ent}_{\gamma_n}(f) \leq \frac12 \int \frac{\|\nabla f\|^2}{f}\,d\gamma_n\,. \ \ \ \ \ (5)$

First, we discuss the correct way to interpret this. Define the Ornstein-Uhlenbeck semi-group ${\{U_t\}}$ by its action

$\displaystyle U_t f(x) = \mathop{\mathbb E}[f(e^{-t} x + \sqrt{1-e^{-2t}} B_1)]\,.$

This is the natural stationary diffusion process on Gaussian space. For every measurable ${f}$, we have

$\displaystyle U_t f \rightarrow \int f d\gamma_n \quad \textrm{ as } t \to \infty$

or equivalently

$\displaystyle \mathrm{Ent}_{\gamma_n}(U_t f) \rightarrow 0 \quad \textrm{ as } t \to \infty\,.$

The log-Sobolev inequality yields quantitative convergence in the relative entropy distance as follows: Define the Fisher information

$\displaystyle I(f) = \int \frac{\|\nabla f\|^2}{f} \,d\gamma_n\,.$

One can check that

$\displaystyle \frac{d}{dt} \mathrm{Ent}_{\gamma_n} (U_t f)\Big|_{t=0} = - I(f)\,,$

thus the Fisher information describes the instantaneous decay of the relative entropy of ${f}$ under diffusion.

So we can rewrite the log-Sobolev inequality as:

$\displaystyle - \frac{d}{dt} \mathrm{Ent}_{\gamma_n}(U_t f)\Big|_{t=0} \geq 2 \mathrm{Ent}_{\gamma_n}(f)\,.$

This expresses the intuitive fact that when the relative entropy is large, its rate of decay toward equilibrium is faster.
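
Both (5) and the derivative identity are easy to check numerically by quadrature in one dimension. The example density below is my own choice (with the closed form for ${U_t f}$ worked out by hand): for ${f(x) = e^{-a^2/2}\cosh(ax)}$ one has ${U_t f(x) = e^{-a^2/2} e^{a^2(1-e^{-2t})/2} \cosh(a e^{-t} x)}$.

```python
import numpy as np

# Quadrature check of the log-Sobolev inequality (5) and of
# (d/dt) Ent(U_t f) |_{t=0} = -I(f) for the sample density
# f(x) = exp(-a^2/2) cosh(a x), which satisfies int f dgamma_1 = 1.
# For this f the OU semigroup has the closed form
#   U_t f(x) = exp(-a^2/2) exp(a^2 (1 - e^{-2t})/2) cosh(a e^{-t} x).
a = 1.5
x = np.linspace(-12, 12, 400_001); dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def ent(t):
    u = np.exp(-a**2/2 + a**2 * (1 - np.exp(-2*t)) / 2) \
        * np.cosh(a * np.exp(-t) * x)
    return np.sum(u * np.log(u) * gauss) * dx

f = np.exp(-a**2 / 2) * np.cosh(a * x)
fprime = a * np.exp(-a**2 / 2) * np.sinh(a * x)
fisher = np.sum(fprime**2 / f * gauss) * dx

print("Ent(f)                   :", ent(0.0))
print("(1/2) I(f)               :", fisher / 2)   # LSI: Ent <= I/2
print("-dEnt/dt at t=0 (~ I(f)) :", (ent(0.0) - ent(1e-5)) / 1e-5)
```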

Martingale property of the optimal drift. Now for the proof of (5). Let ${dW_t = dB_t + v_t\,dt}$ be the entropy-optimal process with ${W_1 \sim f \,d\gamma_n}$. We need one more fact about ${\{v_t\}}$: The optimal drift is a martingale, i.e. ${\mathop{\mathbb E}[v_t \mid \mathcal F_s] = v_s}$ for ${s < t}$, where ${\{\mathcal F_s\}}$ is the filtration generated by the driving Brownian motion.

Let’s give two arguments to support this.

Argument one: Brownian bridges. First, note that by the chain rule for relative entropy, we have:

$\displaystyle D(W_{[0,1]} \,\|\, B_{[0,1]}) = D(W_1 \,\|\, B_1) + \int D(W_{[0,1]} \,\|\, B_{[0,1]} \mid W_1=B_1=x) f(x) d\gamma_n(x)\,.$

But from optimality, we know that the latter expectation is zero. Therefore ${f \,d\gamma_n}$-almost surely, we have

$\displaystyle D(W_{[0,1]} \,\|\, B_{[0,1]} \mid W_1=B_1=x) = 0\,.$

This implies that if we condition on the endpoint ${x}$, then ${W_{[0,1]}}$ is a Brownian bridge (i.e., a Brownian motion conditioned to start at ${0}$ and end at ${x}$).

This implies that ${\mathop{\mathbb E}[v_t \mid v_s, W_1=x] = v_s}$, as one can check that a Brownian bridge ${\{\hat B_t\}}$ with endpoint ${x}$ is described by the drift process ${d\hat B_t = dB_t + \frac{x-\hat B_t}{1-t}\,dt}$, and

$\displaystyle \mathop{\mathbb E}\left[\frac{x-\hat B_t}{1-t} \,\Big|\, B_{[0,s]}\right] = \frac{x-\hat B_s}{1-s}\,.$

That seemed complicated. There is a simpler way to see this: Given ${\hat B_s}$ and any bridge ${\gamma}$ from ${\hat B_s}$ to ${x}$, every “permutation” of the infinitesimal steps in ${\gamma}$ has the same law (by commutativity, they all land at ${x}$). Thus the marginal law of ${dB_t + v_t\,dt}$ at every point ${t \geq s}$ should be the same. In particular,

$\displaystyle \mathop{\mathbb E}[v_t\,dt \mid v_s] = \mathop{\mathbb E}[dB_t + v_t\,dt \mid v_s] = \mathop{\mathbb E}[dB_s + v_s \,ds \mid v_s] = v_s\,ds\,.$
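
Here is a quick Monte Carlo check of this martingale identity (a toy illustration; all numerical choices are arbitrary): simulate the bridge SDE from a fixed state at time ${s}$ and compare the average drift at a later time ${t}$.

```python
import numpy as np

# Monte Carlo check that the Brownian-bridge drift (x - Bhat_t)/(1 - t) is a
# martingale: starting all paths from a fixed state at time s, the average of
# (x - Bhat_t)/(1 - t) at a later time t should match (x - Bhat_s)/(1 - s).
rng = np.random.default_rng(3)
x_end, s, t, paths, steps = 2.0, 0.2, 0.7, 200_000, 500

Bhat_s = 0.5                            # a fixed state at time s
dt = (t - s) / steps
X = np.full(paths, Bhat_s)
time = s
for _ in range(steps):
    X += (x_end - X) / (1.0 - time) * dt \
         + np.sqrt(dt) * rng.standard_normal(paths)
    time += dt

print("E[(x - Bhat_t)/(1-t)] :", np.mean((x_end - X) / (1.0 - t)))
print("(x - Bhat_s)/(1-s)    :", (x_end - Bhat_s) / (1.0 - s))
```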

Argument two: Change of measure. There is a more succinct (though perhaps more opaque) way to see that ${\{v_t\}}$ is a martingale. Note that the process ${\nabla P_{1-t} f(B_t) = P_{1-t} \nabla f(B_t)}$ is a Doob martingale. But we have ${v_t = \frac{\nabla P_{1-t} f(W_t)}{P_{1-t} f(W_t)}}$ and we also know from (3) that ${\frac{1}{P_{1-t} f(W_t)} = M_t}$ is precisely the change of measure that makes ${\{W_t\}}$ into Brownian motion.

Proof of the log-Sobolev inequality. In any case, now we are ready for the proof of (5). It also comes straight from Lehec’s paper. Since ${\{v_t\}}$ is a martingale, we have ${\mathop{\mathbb E}\,\|v_t\|^2 \leq \mathop{\mathbb E}\,\|v_1\|^2}$. So by Theorem 1:

$\displaystyle \mathrm{Ent}_{\gamma_n}(f) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|v_t\|^2\,dt \leq \frac12 \mathop{\mathbb E}\,\|v_1\|^2 = \frac12 \mathop{\mathbb E}\, \frac{\|\nabla f(W_1)\|^2}{f(W_1)^2} = \frac12 \mathop{\mathbb E}\, \frac{\|\nabla f(B_1)\|^2}{f(B_1)}\,.$

The latter quantity is $\frac12 I(f)$. In the last equality, we used the fact that ${\frac{1}{f(W_1)}}$ is precisely the change of measure that turns ${\{W_t\}}$ into Brownian motion.

# Entropy optimality on path space

After Boaz posted on the mother of all inequalities, it seemed about the right time to get around to the next series of posts on entropy optimality. The approach is the same as before, but now we consider entropy optimality on a path space. After finding an appropriate entropy-maximizer, the Brascamp-Lieb inequality will admit a gorgeous one-line proof. Our argument is taken from the beautiful paper of Lehec.

For simplicity, we start first with an entropy optimization on a discrete path space. Then we move on to Brownian motion.

1.1. Entropy optimality on discrete path spaces

Consider a finite state space ${\Omega}$ and a transition kernel ${p : \Omega \times \Omega \rightarrow [0,1]}$. Also fix some time ${T \geq 0}$.

Let ${\mathcal P_T}$ denote the space of all paths ${\gamma : \{0,1,\ldots,T\} \rightarrow \Omega}$. There is a natural measure ${\mu_{\mathcal P}}$ on ${\mathcal P_T}$ coming from the transition kernel:

$\displaystyle \mu_{\mathcal P}(\gamma) = \prod_{t=0}^{T-1} p\left(\gamma(t), \gamma(t+1)\right)\,.$

Now suppose we are given a starting point ${x_0 \in \Omega}$, and a target distribution specified by a function ${f : \Omega \rightarrow {\mathbb R}_+}$ scaled so that ${\mathop{\mathbb E}[f(X_T) \mid X_0 = x_0]=1}$. If we let ${\nu_T}$ denote the law of ${X_T \mid X_0 = x_0}$, then this simply says that ${f}$ is a density with respect to ${\nu_T}$. One should think about ${\nu_T}$ as the natural law at time ${T}$ (given ${X_0=x_0}$), and ${f \nu_T}$ describes a perturbation of this law.

Let us finally define the set ${\mathcal M_T(f; x_0)}$ of all measures ${\mu}$ on ${\mathcal P_T}$ that start at ${x_0}$ and end at ${f \nu_T}$, i.e. those measures satisfying

$\displaystyle \mu\left(\{\gamma : \gamma(0)=x_0\}\right) = 1\,,$

and for every ${x \in \Omega}$,

$\displaystyle f(x) \nu_T(x) = \sum_{\gamma \in \mathcal P_T : \gamma(T)=x} \mu(\gamma)\,.$

Now we can consider the entropy optimization problem:

$\displaystyle \min \left\{ D(\mu \,\|\, \mu_{\mathcal P}) : \mu \in \mathcal M_T(f;x_0) \right\}\,. \ \ \ \ \ (1)$

One should verify that, like many times before, we are minimizing the relative entropy over a polytope.

One can think of the optimization as simply computing the most likely way for a mass of particles sitting at ${x_0}$ to end up in the distribution ${f \nu_T}$ at time ${T}$.

The optimal solution ${\mu^*}$ exists and is unique. Moreover, we can describe it explicitly: ${\mu^*}$ is given by a time-inhomogeneous Markov chain. For ${0 \leq t \leq T-1}$, this chain has transition kernel

$\displaystyle q_t(x,y) = p(x,y) \frac{H_{T-t-1} f(y)}{H_{T-t} f(x)}\,, \ \ \ \ \ (2)$

where ${H_t}$ is the heat semigroup of our chain ${\{X_t\}}$, i.e.

$\displaystyle H_t f(x) = \mathop{\mathbb E}[f(X_t) \mid X_0 = x]\,.$

Let ${\{W_t\}}$ denote the time-inhomogeneous chain with transition kernels ${\{q_t\}}$ and ${W_0=x_0}$ and let ${\mu}$ denote the law of the random path ${\{W_0, \ldots, W_T\}}$. We will now verify that ${\mu}$ is the optimal solution to (1).

We first need to confirm that ${\mu \in \mathcal M_T(f;x_0)}$, i.e. that ${W_T}$ has law ${f \nu_T}$. To this end, we will verify inductively that ${W_t}$ has law ${(H_{T-t} f)\cdot \nu_t}$. For ${t=0}$, this follows by definition. For the inductive step:

$\displaystyle \begin{array}{lll} \displaystyle\mathop{\mathbb P}[W_{t+1}=y] &= \sum_{x \in \Omega} \Pr[W_t=x] \cdot p(x,y) \frac{H_{T-t-1} f(y)}{H_{T-t} f(x)} \\ \displaystyle&= \sum_{x \in \Omega} H_{T-t} f(x) \nu_t(x) p(x,y) \frac{H_{T-t-1} f(y)}{H_{T-t} f(x)} \\ \displaystyle&= \sum_{x \in \Omega} \nu_t(x) p(x,y) H_{T-t-1}f(y) \\ \displaystyle & = H_{T-t-1} f(y) \nu_{t+1}(y)\,. \end{array}$

We have confirmed that ${\mu \in \mathcal M_T(f;x_0)}$. Let us now verify its optimality by writing

$\displaystyle D(f \nu_T \,\|\,\nu_T) = \mathop{\mathbb E}_{\nu_T} [f \log f] = \mathop{\mathbb E}[\log f(W_T)]\,,$

where the final equality uses the fact we just proved: ${W_T}$ has law ${f \nu_T}$. Continuing, we have

$\displaystyle \mathop{\mathbb E}[\log f(W_T)] = \sum_{t=0}^{T-1} \mathop{\mathbb E}\left[\log \frac{H_{T-t-1} f(W_{t+1})}{H_{T-t} f(W_t)}\right] = \sum_{t=0}^{T-1} \mathop{\mathbb E} \left[D(q_t(W_t, \cdot) \,\|\, p(W_t,\cdot))\right]\,,$

where the final equality uses the definition of ${q_t}$ in (2). The latter quantity is precisely ${D(\mu \,\|\, \mu_{\mathcal P})}$ by the chain rule for relative entropy.

Exercise: One should check that if ${\{A_t\}}$ and ${\{B_t\}}$ are two time-inhomogeneous Markov chains on ${\Omega}$ with respective transition kernels ${a_t}$ and ${b_t}$ then indeed the chain rule for relative entropy yields

$\displaystyle D(\{A_0, \ldots, A_T\} \,\|\, \{B_0, \ldots, B_T\}) = \sum_{t=0}^{T-1} \mathop{\mathbb E}\left[D\left(a_t(A_t, \cdot)\,\|\,b_t(A_t,\cdot)\right)\right]\,. \ \ \ \ \ (3)$

We conclude that

$\displaystyle D(f \nu_T \,\|\, \nu_T) = D(\mu \,\|\,\mu_{\mathcal P})\,,$

and from this one immediately concludes that ${\mu=\mu^*}$. Indeed, for any measure ${\mu' \in \mathcal M_T(f;x_0)}$, we must have ${D(\mu' \,\|\,\mu_{\mathcal P}) \geq D(f \nu_T \,\|\,\nu_T)}$. This follows because ${f \nu_T}$ is the law of the endpoint of a path drawn from ${\mu'}$ and ${\nu_T}$ is the law of the endpoint of a path drawn from ${\mu_{\mathcal P}}$. The relative entropy between the endpoints is certainly less than along the entire path. (This intuitive fact can again be proved via the chain rule for relative entropy by conditioning on the endpoint of the path.)
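
The entire discrete construction fits comfortably in code. Below is a minimal sketch on a random toy chain (the specific sizes and the target ${f}$ are arbitrary choices for illustration): it builds the kernels (2), then checks that ${W_T}$ has law ${f \nu_T}$ and that the chain-rule sum (3) agrees with ${D(f \nu_T \,\|\, \nu_T)}$.

```python
import numpy as np

# The entropy-optimal chain (2) on a small state space: build the
# time-inhomogeneous kernels q_t via the h-transform, then check that
# (i) W_T has law f * nu_T, and (ii) the chain-rule sum (3) for
# D(mu || mu_P) equals the endpoint entropy D(f nu_T || nu_T).
rng = np.random.default_rng(4)
S, T, x0 = 5, 6, 0

p = rng.random((S, S)); p /= p.sum(axis=1, keepdims=True)  # transition kernel
nu_T = np.linalg.matrix_power(p, T)[x0]                    # law of X_T | X_0=x0

f = rng.random(S) + 0.1
f /= np.dot(f, nu_T)                                       # density wrt nu_T

# H_t f = p^t f, as a vector indexed by the starting state.
H = [f.copy()]
for _ in range(T):
    H.append(p @ H[-1])

rho = np.zeros(S); rho[x0] = 1.0        # law of W_t, starting from delta_{x0}
path_ent = 0.0                          # accumulates the chain-rule sum (3)
for t in range(T):
    q_t = p * H[T - t - 1][None, :] / H[T - t][:, None]    # kernel (2)
    path_ent += np.sum(rho[:, None] * q_t * np.log(q_t / p))
    rho = rho @ q_t

print("max |rho_T - f*nu_T| :", np.max(np.abs(rho - f * nu_T)))   # ~ 0
print("chain-rule entropy   :", path_ent)
print("D(f nu_T || nu_T)    :", np.sum(f * nu_T * np.log(f)))     # equal
```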

1.2. The Brownian version

Let us now do the same thing for processes driven by Brownian motion in ${\mathbb R^n}$. Let ${\{B_t : t \in [0,1]\}}$ be a Brownian motion with ${B_0=0}$. Let ${\gamma_n}$ be the standard Gaussian measure and recall that ${B_1}$ has law ${\gamma_n}$.

We recall that if we have two measures ${\mu}$ and ${\nu}$ on ${\mathbb R^n}$ such that ${\nu}$ is absolutely continuous with respect to ${\mu}$, we define the relative entropy

$\displaystyle D(\nu\,\|\,\mu) = \int d\nu \log \frac{d\nu}{d\mu}\,.$

Our “path space” will consist of drift processes ${\{W_t : t \in [0,1]\}}$ of the form

$\displaystyle W_t = B_t + \int_0^t u_s\,ds\,, \ \ \ \ \ (4)$

where ${\{u_s\}}$ denotes the drift. We require that ${\{u_s\}}$ is progressively measurable, i.e. that ${u_s}$ is determined by the history of the process up to time ${s}$, and that ${\mathop{\mathbb E} \int_0^1 \|u_s\|^2 \,ds < \infty}$. Note that we can write such a process in differential notation as

$\displaystyle dW_t = dB_t + u_t\,dt\,,$

with ${W_0=0}$.

Fix a smooth density ${f : \mathbb R^n \rightarrow {\mathbb R}_+}$ with ${\int f \,d\gamma_n =1}$. In analogy with the discrete setting, let us use ${\mathcal M(f)}$ to denote the set of processes ${\{W_t\}}$ that can be realized in the form (4) and such that ${W_0 = 0}$ and ${W_1}$ has law ${f d\gamma_n}$.

Let us also use the shorthand ${W_{[0,1]} = \{W_t : t\in [0,1]\}}$ to represent the entire path of the process. Again, we will consider the entropy optimization problem:

$\displaystyle \min \left\{ \vphantom{\bigoplus} D\left(W_{[0,1]} \,\|\, B_{[0,1]}\right) : W_{[0,1]} \in \mathcal M(f) \right\}\,. \ \ \ \ \ (5)$

As in the discrete setting, this problem has a unique optimal solution (in the sense of stochastic processes). Here is the main result.

Theorem 1 (Föllmer) If ${\{ W_t = B_t + \int_0^t u_s\,ds : t \in [0,1]\}}$ is the optimal solution to (5), then

$\displaystyle D\left(W_{[0,1]}\,\|\,B_{[0,1]}\right) = D(W_1 \,\|\, B_1) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|u_t\|^2\,dt\,.$

Just as for the discrete case, one should think of this as asserting that the optimal process only uses as much entropy as is needed for the difference in laws at the endpoint. The RHS should be thought of as an integral over the expected relative entropy generated at time ${t}$ (just as in the chain rule expression (3)).

The reason for the quadratic term is the usual relative entropy approximation for infinitesimal perturbations. For instance, consider the relative entropy between a binary random variable with expected value ${\tfrac12 (1-\varepsilon)}$ and a binary random variable with expected value ${\tfrac12}$:

$\displaystyle \frac12(1-\varepsilon) \log (1-\varepsilon) + \frac12 (1+\varepsilon) \log (1+\varepsilon) \approx \frac12 \varepsilon^2\,.$

I am going to delay the proof of Theorem 1 to the next post because doing it in an elementary way will require some discussion of Itô calculus. For now, let us prove the following.

Lemma 2 For any process ${W_{[0,1]} \in \mathcal M(f)}$ given by a drift ${\{u_t : t\in[0,1]\}}$, it holds that

$\displaystyle D(W_1 \,\|\, B_1) \leq D(W_{[0,1]} \,\|\, B_{[0,1]}) =\frac12 \int_0^1 \mathop{\mathbb E}\,\|u_t\|^2\,dt\,.$

Proof: The proof will be somewhat informal. It can be done easily using Girsanov’s theorem, but we try to keep the presentation here elementary and in correspondence with the discrete version above.

Let us first use the chain rule for relative entropy to calculate

$\displaystyle D\left(W_{[0,1]} \,\|\,B_{[0,1]}\right) = \int_0^1 \mathop{\mathbb E}\left[D( dW_t \,\|\, dB_t)\right] = \int_0^1 \mathop{\mathbb E}\left[D(dB_t + u_t\,dt \,\|\,dB_t)\right]\,. \ \ \ \ \ (6)$

Note that ${dB_t}$ has the law of an ${n}$-dimensional Gaussian with covariance ${dt \cdot I}$.

If ${Z}$ is an ${n}$-dimensional Gaussian with covariance ${\sigma^2 \cdot I}$ and ${u \in \mathbb R^n}$, then

$\displaystyle \begin{array}{lll} D(Z + u \,\|\, Z) &= \mathop{\mathbb E}\left[\log \frac{e^{-\|Z\|^2/2\sigma^2}}{e^{-\|Z+u\|^2/2\sigma^2}}\right] \\ &= \mathop{\mathbb E}\left[\frac{\|u\|^2}{2\sigma^2} + \frac{\langle u,Z\rangle}{\sigma^2}\right] \\ &= \frac{\|u\|^2}{2\sigma^2}\,. \end{array}$

Therefore:

$\displaystyle D(dB_t + u_t\,dt \,\|\,dB_t) = \mathop{\mathbb E} \left[\frac{\|u_t\|^2 dt^2}{2 dt}\mid \mathcal F_t\right] =\frac12 \mathop{\mathbb E}\left[\|u_t\|^2\,dt \mid \mathcal F_t\right]\,,$

where the latter expectation is understood to be conditioned on the past $\mathcal F_t$ up to time ${t}$.

In particular, plugging this into (6), we have

$\displaystyle D\left(W_{[0,1]} \,\|\,B_{[0,1]}\right) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|u_t\|^2\,dt\,. \ \ \ \ \ (7)$

The inequality ${D(W_1 \,\|\, B_1) \leq D\left(W_{[0,1]} \,\|\,B_{[0,1]}\right)}$ then follows since ${W_1}$ is a function of the path ${W_{[0,1]}}$, and relative entropy can only decrease under such a map. $\Box$
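
The Gaussian identity ${D(Z+u \,\|\, Z) = \|u\|^2/2\sigma^2}$ computed above is also easy to confirm by Monte Carlo (a toy check; the parameters are arbitrary):

```python
import numpy as np

# Monte Carlo check of D(Z + u || Z) = |u|^2 / (2 sigma^2) for an
# n-dimensional Gaussian Z with covariance sigma^2 * I.
rng = np.random.default_rng(5)
n, sigma = 3, 0.7
u = np.array([0.3, -0.5, 0.2])

# D = E_{w ~ Z+u}[ log p_{Z+u}(w) - log p_Z(w) ]
#   = E[ (|w|^2 - |w - u|^2) / (2 sigma^2) ]
w = u + sigma * rng.standard_normal((1_000_000, n))
logratio = (np.sum(w**2, axis=1) - np.sum((w - u)**2, axis=1)) / (2 * sigma**2)
print("Monte Carlo estimate :", logratio.mean())
print("|u|^2 / (2 sigma^2)  :", np.sum(u**2) / (2 * sigma**2))
```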

1.3. Brascamp-Lieb

The proof is taken directly from Lehec. We will use the entropic formulation of Brascamp-Lieb due to Carlen and Cordero-Erausquin.

Let ${E}$ be a Euclidean space with subspaces ${E_1, E_2, \ldots, E_m}$. Let ${P_i}$ denote the orthogonal projection onto ${E_i}$. Now suppose that for positive numbers ${c_1, c_2, \ldots, c_m > 0}$, we have

$\displaystyle \sum_{i=1}^m c_i P_i = \mathrm{id}_E\,. \ \ \ \ \ (8)$

By (8), we have for all ${x \in E}$:

$\displaystyle \|x\|^2 = \left\langle x,\sum_{i=1}^m c_i P_i x\right\rangle = \sum_{i=1}^m c_i\|P_i x\|^2\,.$

The latter equality uses the fact that each ${P_i}$ is an orthogonal projection.

Let ${Z}$ denote a standard Gaussian on ${E}$, and let ${Z_i}$ denote a standard Gaussian on ${E_i}$ for each ${i=1,2,\ldots, m}$.

Theorem 3 (Carlen & Cordero-Erausquin version of Brascamp-Lieb) For any random vector ${X \in E}$, it holds that

$\displaystyle D(X \,\|\, Z) \geq \sum_{i=1}^m c_i D(P_i X \,\|\, Z_i)\,.$

Proof: Let ${\{W_t : t \in [0,1]\}}$ with ${dW_t = dB_t + v_t\,dt}$ denote the entropy-optimal drift process such that ${W_1}$ has the law of ${X}$. Then by Theorem 1,

$\displaystyle D(X\,\|\,Z) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|v_t\|^2\,dt = \frac12 \int_0^1 \sum_{i=1}^m c_i \mathop{\mathbb E}\,\|P_i v_t\|^2\,dt \geq \sum_{i=1}^m c_i D(P_i X \,\|\, Z_i)\,,$

where the latter inequality uses Lemma 2 and the fact that ${P_i W_1}$ has law ${P_i X}$. $\Box$

# Lecture notes for the Summer School on Combinatorial Optimization at HIM

As part of a summer school associated to the Hausdorff Institute for Mathematics Program on Combinatorial Optimization, I will be giving some lectures on “Semi-definite extended formulations and sums of squares.”

I wanted to post here a draft of the lecture notes. These extend and complete the series of posts here on non-negative and psd rank and lifts of polytopes. They also incorporate many corrections, and have exercises of varying levels of difficulty. The bibliographic references are sparse at the moment because I am posting them from somewhere in the Adriatic (where wifi is also sparse).

# Entropy optimality: Analytic psd rank and John’s theorem

Recall that our goal is to sketch a proof of the following theorem, where the notation is from the last post. I will assume a knowledge of the three posts on polyhedral lifts and non-negative rank, as our argument will proceed by analogy.

Theorem 1 For every ${m \geq 1}$ and ${g : \{0,1\}^m \rightarrow \mathbb R_+}$, there exists a constant ${C(g)}$ such that the following holds. For every ${n \geq 2m}$,

$\displaystyle 1+n^{d} \geq \mathrm{rank}_{\mathsf{psd}}(M_n^g) \geq C(g) \left(\frac{n}{\log n}\right)^{(d-1)/2}\,, \ \ \ \ \ (1)$

where ${d = \deg_{\mathsf{sos}}(g)}$.

In this post, we will see how John’s theorem can be used to transform a psd factorization into one of a nicer analytic form. Using this, we will be able to construct a convex body that contains an approximation to every non-negative matrix of small psd rank.

1.1. Finite-dimensional operator norms

Let ${H}$ denote a finite-dimensional Euclidean space over ${\mathbb R}$ equipped with inner product ${\langle \cdot,\cdot\rangle}$ and norm ${|\cdot|}$. For a linear operator ${A : H \rightarrow H}$, we define the operator, trace, and Frobenius norms by

$\displaystyle \|A\| = \max_{x \neq 0} \frac{|Ax|}{|x|},\quad \|A\|_* = \mathrm{Tr}(\sqrt{A^T A}),\quad \|A\|_F = \sqrt{\mathrm{Tr}(A^T A)}\,.$

Let ${\mathcal M(H)}$ denote the set of self-adjoint linear operators on ${H}$. Note that for ${A \in \mathcal M(H)}$, the preceding three norms are precisely the ${\ell_{\infty}}$, ${\ell_1}$, and ${\ell_2}$ norms of the eigenvalues of ${A}$. For ${A,B \in \mathcal M(H)}$, we use ${A \succeq 0}$ to denote that ${A}$ is positive semi-definite and ${A \succeq B}$ for ${A-B \succeq 0}$. We use ${\mathcal D(H) \subseteq \mathcal M(H)}$ for the set of density operators: Those ${A \in \mathcal M(H)}$ with ${A \succeq 0}$ and ${\mathrm{Tr}(A)=1}$.

One should recall that ${\mathrm{Tr}(A^T B)}$ is an inner product on the space of linear operators, and we have the operator analogs of the Hölder inequalities: ${\mathrm{Tr}(A^T B) \leq \|A\| \cdot \|B\|_*}$ and ${\mathrm{Tr}(A^T B) \leq \|A\|_F \|B\|_F}$.
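
These norm facts are easy to spot-check numerically; here is a throwaway verification with random symmetric matrices standing in for an actual factorization:

```python
import numpy as np

# Spot check of the operator Holder inequalities
#   Tr(A^T B) <= ||A||   * ||B||_*    (operator norm x trace norm)
#   Tr(A^T B) <= ||A||_F * ||B||_F    (Cauchy-Schwarz for the trace product)
# for random symmetric matrices.
rng = np.random.default_rng(6)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

op = max(abs(np.linalg.eigvalsh(A)))          # ||A||: largest |eigenvalue|
trn = np.sum(np.abs(np.linalg.eigvalsh(B)))   # ||B||_*: sum of |eigenvalues|
inner = abs(np.trace(A.T @ B))

print(inner, "<=", op * trn)
print(inner, "<=", np.linalg.norm(A) * np.linalg.norm(B))  # Frobenius norms
```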

1.2. Rescaling the psd factorization

As in the case of non-negative rank, consider finite sets ${X}$ and ${Y}$ and a matrix ${M : X \times Y \rightarrow \mathbb R_+}$. For the purposes of proving a lower bound on the psd rank of some matrix, we would like to have a nice analytic description.

To that end, suppose we have a rank-${r}$ psd factorization

$\displaystyle M(x,y) = \mathrm{Tr}(A(x) B(y))$

where ${A : X \rightarrow \mathcal S_+^r}$ and ${B : Y \rightarrow \mathcal S_+^r}$. The following result of Briët, Dadush and Pokutta (2013) gives us a way to “scale” the factorization so that it becomes nicer analytically. (The improved bound stated here is from an article of Fawzi, Gouveia, Parrilo, Robinson, and Thomas, and we follow their proof.)

Lemma 2 Every ${M}$ with ${\mathrm{rank}_{\mathsf{psd}}(M) \leq r}$ admits a factorization ${M(x,y)=\mathrm{Tr}(P(x) Q(y))}$ where ${P : X \rightarrow \mathcal S_+^r}$ and ${Q : Y \rightarrow \mathcal S_+^r}$ and, moreover,

$\displaystyle \max \{ \|P(x)\| \cdot \|Q(y)\| : x \in X, y \in Y \} \leq r \|M\|_{\infty}\,,$

where ${\|M\|_{\infty} = \max_{x \in X, y \in Y} M(x,y)}$.

Proof: Start with a rank-${r}$ psd factorization ${M(x,y) = \mathrm{Tr}(A(x) B(y))}$. Observe that there is a degree of freedom here, because for any invertible operator ${J}$, we get another psd factorization ${M(x,y) = \mathrm{Tr}\left(\left(J A(x) J^T\right) \cdot \left((J^{-1})^T B(y) J^{-1}\right)\right)}$.

Let ${U = \{ u \in \mathbb R^r : \exists x \in X\,\, A(x) \succeq uu^T \}}$ and ${V = \{ v \in \mathbb R^r : \exists y \in Y\,\, B(y) \succeq vv^T \}}$. Set ${\Delta = \|M\|_{\infty}}$. We may assume that ${U}$ and ${V}$ both span ${\mathbb R^r}$ (else we can obtain a lower-rank psd factorization). Both sets are bounded by finiteness of ${X}$ and ${Y}$.

Let ${C=\mathrm{conv}(U)}$ and note that ${C}$ is centrally symmetric and contains the origin. Now John’s theorem tells us there exists a linear operator ${J : \mathbb R^r \rightarrow \mathbb R^r}$ such that

$\displaystyle B_{\ell_2} \subseteq J C \subseteq \sqrt{r} B_{\ell_2}\,, \ \ \ \ \ (2)$

where ${B_{\ell_2}}$ denotes the unit ball in the Euclidean norm. Let us now set ${P(x) = J A(x) J^T}$ and ${Q(y) = (J^{-1})^T B(y) J^{-1}}$.

Eigenvalues of ${P(x)}$: Let ${w}$ be an eigenvector of ${P(x)}$ normalized so the corresponding eigenvalue is ${\|w\|_2^2}$. Then ${P(x) \succeq w w^T}$, implying that ${J^{-1} w \in U}$ (here we use that ${A \succeq 0 \implies S A S^T \succeq 0}$ for any ${S}$). Since ${w = J(J^{-1} w)}$, (2) implies that ${\|w\|_2 \leq \sqrt{r}}$. We conclude that every eigenvalue of ${P(x)}$ is at most ${r}$.

Eigenvalues of ${Q(y)}$: Let ${w}$ be an eigenvector of ${Q(y)}$ normalized so that the corresponding eigenvalue is ${\|w\|_2^2}$. Then as before, we have ${Q(y) \succeq ww^T}$ and this implies ${J^T w \in V}$. Now, on the one hand we have

$\displaystyle \max_{z \in JC}\, \langle z,w\rangle \geq \|w\|_2 \ \ \ \ \ (3)$

since ${JC \supseteq B_{\ell_2}}$.

On the other hand:

$\displaystyle \max_{z \in JC}\, \langle z,w\rangle^2 = \max_{z \in C}\, \langle Jz, w\rangle^2 = \max_{z \in C}\, \langle z, J^T w\rangle^2\,. \ \ \ \ \ (4)$

Finally, observe that for any ${u \in U}$ and ${v \in V}$, we have

$\displaystyle \langle u,v\rangle^2 =\langle uu^T, vv^T\rangle \leq \max_{x \in X, y \in Y} \langle A(x), B(y)\rangle \leq \Delta\,.$

By convexity, this implies that ${\max_{z \in C}\, \langle z,v\rangle^2 \leq \Delta}$ for all ${v \in V}$, bounding the right-hand side of (4) by ${\Delta}$. Combining this with (3) yields ${\|w\|_2^2 \leq \Delta}$. We conclude that all the eigenvalues of ${Q(y)}$ are at most ${\Delta}$. $\Box$

1.3. Convex proxy for psd rank

Again, in analogy with the non-negative rank setting, we can define an “analytic psd rank” parameter for matrices ${N : X \times Y \rightarrow \mathbb R_+}$:

$\displaystyle \alpha_{\mathsf{psd}}(N) = \min \Big\{ \alpha \,:\, \exists\, k \geq 1,\ A : X \rightarrow \mathcal S_+^k,\ B : Y \rightarrow \mathcal S_+^k \ \textrm{ with } \ N(x,y) = \mathrm{Tr}(A(x) B(y)),$

$\displaystyle \hphantom{\alpha_{\mathsf{psd}}(N) = \min \Big\{ \alpha \,:\,} \mathop{\mathbb E}_{x \in X}[A(x)]=I \quad \textrm{and} \quad \|A(x)\| \leq \alpha \quad \forall x \in X\,,$

$\displaystyle \hphantom{\alpha_{\mathsf{psd}}(N) = \min \Big\{ \alpha \,:\,} \|B(y)\| \leq \frac{\alpha}{k}\, \mathop{\mathbb E}_{y \in Y}[\mathrm{Tr}(B(y))] \quad \forall y \in Y \Big\}\,.$

Note that we have implicitly equipped ${X}$ and ${Y}$ with the uniform measure. The main point here is that ${k}$ can be arbitrary. One can verify that ${\alpha_{\mathsf{psd}}}$ is convex.

And there is a corresponding approximation lemma. We use ${\|N\|_{\infty}=\max_{x,y} |N(x,y)|}$ and ${\|N\|_1 = \mathop{\mathbb E}_{x,y} |N(x,y)|}$.

Lemma 3 For every non-negative matrix ${M : X \times Y \rightarrow \mathbb R_+}$ and every ${\eta \in (0,1]}$, there is a matrix ${N}$ such that ${\|M-N\|_{\infty} \leq \eta \|M\|_{\infty}}$ and

$\displaystyle \alpha_{\mathsf{psd}}(N) \leq O(\mathrm{rank}_{\mathsf{psd}}(M)) \frac{1}{\eta} \frac{\|M\|_{\infty}}{\|M\|_1}\,.$

Using Lemma 2 in a straightforward way, it is not particularly difficult to construct the approximator ${N}$. The condition ${\mathop{\mathbb E}_x [A(x)] = I}$ poses a slight difficulty that requires adding a small multiple of the identity to the LHS of the factorization (to avoid a poor condition number), but this has a correspondingly small effect on the approximation quality. Putting “Alice” into “isotropic position” is not essential, but it makes the next part of the approach (quantum entropy optimization) somewhat simpler because one is always measuring relative entropy to the maximally mixed state.

# Entropy optimality: Quantum lifts of polytopes

In these previous posts, we explored whether the cut polytope can be expressed as the linear projection of a polytope with a small number of facets (i.e., whether it has a small linear programming extended formulation).

For many cut problems, semi-definite programs (SDPs) are able to achieve better approximation ratios than LPs. The most famous example is the Goemans-Williamson ${0.878}$-approximation for MAX-CUT. The techniques of the previous posts (see the full paper for details) are able to show that no polynomial-size LP can achieve better than factor ${1/2}$.

1.1. Spectrahedral lifts

The feasible regions of LPs are polyhedra. Up to linear isomorphism, every polyhedron ${P}$ can be represented as ${P = \mathbb R_+^n \cap V}$ where ${\mathbb R_+^n}$ is the positive orthant and ${V \subseteq \mathbb R^n}$ is an affine subspace.

In this context, it makes sense to study any cones that can be optimized over efficiently. A prominent example is the positive semi-definite cone. Let us define ${\mathcal S_+^n \subseteq \mathbb R^{n^2}}$ as the set of ${n \times n}$ real, symmetric matrices with non-negative eigenvalues. A spectrahedron is the intersection ${\mathcal S_+^n \cap V}$ with an affine subspace ${V}$. The value ${n}$ is referred to as the dimension of the spectrahedron.

In analogy with the ${\gamma}$ parameter we defined for polyhedral lifts, let us define ${\bar \gamma_{\mathsf{sdp}}(P)}$ for a polytope ${P}$ to be the minimal dimension of a spectrahedron that linearly projects to ${P}$. It is an exercise to show that ${\bar \gamma_{\mathsf{sdp}}(P) \leq \bar \gamma(P)}$ for every polytope ${P}$. In other words, spectrahedral lifts are at least as powerful as polyhedral lifts in this model.

In fact, they are strictly more powerful. Certainly there are many examples of this in the setting of approximation (like the Goemans-Williamson SDP mentioned earlier), but there are also recent gaps between ${\bar \gamma}$ and ${\bar \gamma_{\mathsf{sdp}}}$ for polytopes; see the work of Fawzi, Saunderson, and Parrilo.

Nevertheless, we are now able to prove strong lower bounds on the dimension of such lifts. Let us consider the cut polytope ${\mathrm{CUT}_n}$ as in previous posts.

Theorem 1 (L-Raghavendra-Steurer 2015) There is a constant ${c > 0}$ such that for every ${n \geq 1}$, one has ${\bar \gamma_{\mathsf{sdp}}(\mathrm{CUT}_n) \geq e^{c n^{2/13}}}$.

Our goal in this post and the next is to explain the proof of this theorem and how quantum entropy maximization plays a key role.

1.2. PSD rank and factorizations

Just as in the setting of polyhedra, there is a notion of “factorization through a cone” that characterizes the parameter ${\bar \gamma_{\mathsf{sdp}}(P)}$. Let ${M \in \mathbb R^{m \times n}_+}$ be a non-negative matrix. One defines the psd rank of ${M}$ as the quantity

$\displaystyle \mathrm{rank}_{\mathsf{psd}}(M) = \min \left\{ r : M_{ij} = \mathrm{Tr}(A_i B_j) \textrm{ for some } A_1, \ldots, A_m, B_1, \ldots, B_n \in \mathcal S_+^r\right\}\,.$

The following theorem was proved independently by Fiorini, Massar, Pokutta, Tiwary, and de Wolf, and by Gouveia, Parrilo, and Thomas. The proof is a direct analog of Yannakakis’ proof for non-negative rank.

Theorem 2 For every polytope ${P}$, it holds that ${\bar \gamma_{\mathsf{sdp}}(P) = \mathrm{rank}_{\mathsf{psd}}(M)}$ for any slack matrix ${M}$ of ${P}$.

Recall the class ${\mathrm{QML}_n^+}$ of quadratic multi-linear functions that are non-negative on ${\{0,1\}^n}$ and the matrix ${\mathcal M_n : \mathrm{QML}_n^+ \times \{0,1\}^n \rightarrow \mathbb R_+}$ given by

$\displaystyle \mathcal M_n(f,x) = f(x)\,.$

We saw previously that ${\mathcal M_n}$ is a submatrix of some slack matrix of ${\mathrm{CUT}_n}$. Thus our goal is to prove a lower bound on ${\mathrm{rank}_{\mathsf{psd}}(\mathcal M_n)}$.

1.3. Sum-of-squares certificates

Just as in the setting of non-negative matrix factorization, we can think of a low psd rank factorization of ${\mathcal M_n}$ as a small set of “axioms” that can prove the non-negativity of every function in ${\mathrm{QML}_n^+}$. But now our proof system is considerably more powerful.

For a subspace of functions ${\mathcal U \subseteq L^2(\{0,1\}^n)}$, let us define the cone

$\displaystyle \mathsf{sos}(\mathcal U) = \mathrm{cone}\left(q^2 : q \in \mathcal U\right)\,.$

This is the cone of squares of functions in ${\mathcal U}$. We will think of ${\mathcal U}$ as a set of axioms of size ${\mathrm{dim}(\mathcal U)}$ that is able to assert non-negativity of every ${f \in \mathrm{sos}(\mathcal U)}$ by writing

$\displaystyle f = \sum_{i=1}^k q_i^2$

for some ${q_1, \ldots, q_k \in \mathcal U}$.

Fix a subspace ${\mathcal U}$ and let ${r = \dim(\mathcal U)}$. Fix also a basis ${q_1, \ldots, q_r : \{0,1\}^n \rightarrow \mathbb R}$ for ${\mathcal U}$.

Define ${B : \{0,1\}^n \rightarrow \mathcal S_+^r}$ by setting ${B(x)_{ij} = q_i(x) q_j(x)}$. Note that ${B(x)}$ is PSD for every ${x}$ because ${B(x) = \vec q(x) \vec q(x)^T}$ where ${\vec q(x)=(q_1(x), \ldots, q_r(x))}$.

We can write every ${p \in \mathcal U}$ as ${p = \sum_{i=1}^r \lambda_i q_i}$. Defining ${A(p^2) \in \mathcal S_+^r}$ by ${A(p^2)_{ij} = \lambda_i \lambda_j}$, we see that

$\displaystyle \mathrm{Tr}(A(p^2) B(x)) = \sum_{i,j} \lambda_i \lambda_j q_i(x) q_j(x) = p(x)^2\,.$

Now every ${f \in \mathsf{sos}(\mathcal U)}$ can be written as ${\sum_{i=1}^k c_i p_i^2}$ for some ${k \geq 0}$ and ${\{c_i \geq 0\}}$. Therefore if we define ${A(f) = \sum_{i=1}^k c_i A(p_i^2)}$ (which is a positive sum of PSD matrices), we arrive at the representation

$\displaystyle f(x) = \mathrm{Tr}(A(f) B(x))\,.$

In conclusion, if ${\mathrm{QML}_n^+ \subseteq \mathsf{sos}(\mathcal U)}$, then ${\mathrm{rank}_{\mathsf{psd}}(\mathcal M_n) \leq \dim(\mathcal U)}$.
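
The construction above is completely mechanical, and a small script makes it concrete. The following sketch is a toy instance of my own (with ${n=4}$ and ${\mathcal U = \mathcal Q_1}$, the multi-linear polynomials of degree at most one): it builds ${B(x) = \vec q(x) \vec q(x)^T}$ and ${A(f)}$ for a generic ${f \in \mathsf{sos}(\mathcal Q_1)}$ and verifies ${f(x) = \mathrm{Tr}(A(f) B(x))}$ on all of ${\{0,1\}^n}$.

```python
import numpy as np
from itertools import product

# Building the psd factorization f(x) = Tr(A(f) B(x)) from an sos certificate,
# here for n = 4 and the axioms U = Q_1 with basis 1, x_1, ..., x_n.
rng = np.random.default_rng(7)
n = 4
points = np.array(list(product([0, 1], repeat=n)))     # all of {0,1}^n

def q_vec(x):                       # the basis of Q_1 evaluated at x
    return np.concatenate(([1.0], x))

# f = c_1 p_1^2 + c_2 p_2^2 with p_i in Q_1 (a generic member of sos(Q_1)).
lam1, lam2 = rng.standard_normal(n + 1), rng.standard_normal(n + 1)
c1, c2 = 0.7, 1.3
A_f = c1 * np.outer(lam1, lam1) + c2 * np.outer(lam2, lam2)   # A(f), psd

for x in points:
    qx = q_vec(x)
    B_x = np.outer(qx, qx)                                    # B(x), psd
    direct = c1 * np.dot(lam1, qx)**2 + c2 * np.dot(lam2, qx)**2
    assert abs(np.trace(A_f @ B_x) - direct) < 1e-9
print("Tr(A(f) B(x)) = f(x) on all of {0,1}^n; factorization rank:", n + 1)
```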

By a “purification” argument, one can also prove a converse of sorts: there is always a subspace ${\mathcal U}$ with ${\mathrm{QML}_n^+ \subseteq \mathsf{sos}(\mathcal U)}$ and ${\dim(\mathcal U) \leq \mathrm{rank}_{\mathsf{psd}}(\mathcal M_n)^2}$.

1.4. The canonical axioms

And just as ${d}$-juntas were the canonical axioms for our NMF proof system, there is a similar canonical family in the SDP setting: Let ${\mathcal Q_d}$ be the subspace of all multi-linear polynomials on ${\mathbb R^n}$ of degree at most ${d}$. We have

$\displaystyle \dim(\mathcal Q_d) \leq \sum_{k=0}^d {n \choose k} \leq 1+n^d\,. \ \ \ \ \ (1)$

For a function ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, one defines

$\displaystyle \deg_{\mathsf{sos}}(f) = \min \{d : f \in \mathsf{sos}(\mathcal Q_{d}) \}\,.$

(One could debate whether the definition of sum-of-squares degree should have ${d/2}$ or ${d}$. The most convincing arguments suggest that we should use membership in the cone of squares over ${\mathcal Q_{\lfloor d/2\rfloor}}$ so that the sos-degree will be at least the real-degree of the function.)

On the other hand, our choice has the following nice property.

Lemma 3 For every ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, we have ${\deg_{\mathsf{sos}}(f) \leq \deg_J(f)}$.

Proof: If ${q}$ is a non-negative ${d}$-junta, then ${\sqrt{q}}$ is also a non-negative ${d}$-junta. It is elementary to see that every ${d}$-junta is a polynomial of degree at most ${d}$, thus ${q}$ is the square of a polynomial of degree at most ${d}$. $\Box$

1.5. The canonical tests

As with junta-degree, there is a simple characterization of sos-degree in terms of separating functionals. Say that a functional ${\varphi : \{0,1\}^n \rightarrow \mathbb R}$ is degree-${d}$ pseudo-positive if

$\displaystyle \langle \varphi, q^2 \rangle = \mathop{\mathbb E}_{x \in \{0,1\}^n} \varphi(x) q(x)^2 \geq 0$

whenever ${q : \{0,1\}^n \rightarrow \mathbb R}$ satisfies ${\deg(q) \leq d}$ (and by ${\deg}$ here, we mean degree as a multi-linear polynomial on ${\{0,1\}^n}$).

Again, since ${\mathsf{sos}(\mathcal Q_d)}$ is a closed convex set, there is precisely one way to show non-membership there. The following characterization is elementary.

Lemma 4 For every ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, it holds that ${\deg_{\mathsf{sos}}(f) > d}$ if and only if there is a degree-${d}$ pseudo-positive functional ${\varphi : \{0,1\}^n \rightarrow \mathbb R}$ such that ${\langle \varphi,f\rangle < 0}$.

1.6. The connection to psd rank

Following the analogy with non-negative rank, we have two objectives left: (i) to exhibit a function ${f \in \mathrm{QML}_n^+}$ with ${\deg_{\mathsf{sos}}(f)}$ large, and (ii) to give a connection between the sum-of-squares degree of ${f}$ and the psd rank of an associated matrix.

Notice that the function ${f(x)=(1-\sum_{i=1}^n x_i)^2}$ we used for junta-degree has ${\deg_{\mathsf{sos}}(f)=1}$, making it a poor candidate. Fortunately, Grigoriev has shown that the knapsack polynomial has large sos-degree.

Theorem 5 For every odd ${m \geq 1}$, the function

$\displaystyle f(x) = \left(\frac{m}{2} - \sum_{i=1}^m x_i\right)^2 - \frac14$

has ${\deg_{\mathsf{sos}}(f) \geq \lceil m/2\rceil}$.

Observe that this ${f}$ is non-negative over ${\{0,1\}^m}$ (because ${m}$ is odd), but it is manifestly not non-negative on ${\mathbb R^m}$.

Finally, we recall the submatrices of ${\mathcal M_n}$ defined as follows. Fix some integer ${m \geq 1}$ and a function ${g : \{0,1\}^m \rightarrow \mathbb R_+}$. Then ${M_n^g : {[n] \choose m} \times \{0,1\}^n \rightarrow \mathbb R_+}$ is given by

$\displaystyle M_n^g(S,x) = g(x|_S)\,.$

In the next post, we discuss the proof of the following theorem.

Theorem 6 (L-Raghavendra-Steurer 2015) For every ${m \geq 1}$ and ${g : \{0,1\}^m \rightarrow \mathbb R_+}$, there exists a constant ${C(g)}$ such that the following holds. For every ${n \geq 2m}$,

$\displaystyle 1+n^{d} \geq \mathrm{rank}_{\mathsf{psd}}(M_n^g) \geq C(g) \left(\frac{n}{\log n}\right)^{(d-1)/2}\,,$

where $d=\deg_{\mathsf{sos}}(g)$.

Note that the upper bound is from (1) and the non-trivial content is contained in the lower bound. As before, in conjunction with Theorem 5, this shows that $\mathrm{rank}_{\mathsf{psd}}(\mathcal M_n)$ cannot be bounded by any polynomial in $n$ and thus the same holds for $\bar \gamma_{\mathsf{sdp}}(\mathrm{CUT}_n)$.

# Entropy optimality: Non-negative rank lower bounds

Using the notation from the last post, our goal is now to prove the following theorem.

Theorem 1 For every ${m \geq 1}$ and ${g : \{0,1\}^m \rightarrow \mathbb R_+}$, there is a constant ${C=C(g)}$ such that for all ${n \geq 2m}$,

$\displaystyle \mathrm{rank}_+(M_n^g) \geq C \left(\frac{n}{\log n}\right)^{\mathrm{deg}_J(g)-1}\,.$

1.1. Convex relaxations of non-negative rank

Before getting to the proof, let us discuss the situation in somewhat more generality. Consider finite sets ${X}$ and ${Y}$ and a matrix ${M : X \times Y \rightarrow \mathbb R_+}$ with ${r=\mathrm{rank}_+(M)}$.

In order to use entropy-maximization, we would like to define a convex set of low non-negative rank factorizations (so that maximizing entropy over this set will give us a “simple” factorization). But the convex hull of ${\{ N \in \mathbb R_+^{X \times Y} : \mathrm{rank}_+(N) = 1 \}}$ is precisely the set of all non-negative matrices.

Instead, let us proceed analytically. For simplicity, let us equip both ${X}$ and ${Y}$ with the uniform measure. Let ${\mathcal Q = \{ b : Y \rightarrow \mathbb R_+ \mid \|b\|_1 = 1\}}$ denote the set of probability densities on ${Y}$. Now define

$\displaystyle \alpha_+(N) = \min \Big\{ \alpha \,:\, \exists A \in \mathbb R_+^{X \times k}, B \in \mathbb R_+^{k \times Y} \ \textrm{ with } \ N=AB\,,$

$\displaystyle \hphantom{\alpha_+(N) = \min \Big\{ \alpha \,:\,} \{B_1, \ldots, B_k\} \subseteq \mathcal Q\,,$

$\displaystyle \hphantom{\alpha_+(N) = \min \Big\{ \alpha \,:\,} \max_{i \in [k]} \|B_i\|_{\infty} \leq \alpha\,, \quad \sum_{i=1}^k \|A^{(i)}\|_{\infty} \leq \alpha \Big\}\,.$

Here ${\{A^{(i)}\}}$ are the columns of ${A}$ and ${\{B_i\}}$ are the rows of ${B}$. Note that now ${k}$ is unconstrained.

Observe that ${\alpha_+}$ is a convex function. To see this, given a pair ${N=AB}$ and ${N'=A'B'}$, write

$\displaystyle \frac{N+N'}{2} = \left(\begin{array}{cc} \frac12 A & \frac12 A' \end{array}\right) \left(\begin{array}{l} \vphantom{\bigoplus} B \\ \vphantom{\bigoplus} B' \end{array}\right)\,,$

witnessing the fact that ${\alpha_+(\frac12(N+N')) \leq \frac12 \alpha_+(N) + \frac12 \alpha_+(N')}$.

1.2. A truncation argument

So the set ${\{ N : \alpha_+(N) \leq c \}}$ is convex, but it’s not yet clear how this relates to ${\mathrm{rank}_+}$. We will see now that low non-negative rank matrices are close to matrices with ${\alpha_+}$ small. In standard communication complexity/discrepancy arguments, this corresponds to discarding “small rectangles.”

In the following lemma, we use the norms ${\|M\|_1 = \mathbb E_{x,y} |M_{x,y}|}$ and ${\|M\|_{\infty} = \max_{x,y} |M_{x,y}|}$.

Lemma 2 For every non-negative ${M \in \mathbb R_+^{X \times Y}}$ with ${\mathrm{rank}_+(M) \leq r}$ and every ${\delta \in (0,1)}$, there is a matrix ${\tilde M \in \mathbb R_+^{X \times Y}}$ such that

$\displaystyle \|M-\tilde M\|_1 \leq \delta$

and

$\displaystyle \alpha_+(\tilde M) \leq \frac{r}{\delta} \|M\|_{\infty}\,.$

Proof: Suppose that ${M=AB}$ with ${A \in \mathbb R_+^{X \times r}, B \in \mathbb R_+^{r \times Y}}$, and let us interpret this factorization in the form

$\displaystyle M(x,y) = \sum_{i=1}^r A_i(x) B_i(y) \ \ \ \ \ (1)$

(where ${\{A_i\}}$ are the columns of ${A}$ and ${\{B_i\}}$ are the rows of ${B}$). By rescaling the columns of ${A}$ and the rows of ${B}$, respectively, we may assume that ${\|B_i\|_1=1}$ for every ${i \in [r]}$.

Let ${\Lambda = \{ i : \|B_i\|_{\infty} > \tau \}}$ denote the “bad set” of indices (we will choose ${\tau}$ momentarily). Observe that if ${i \in \Lambda}$, then

$\displaystyle \|A_i\|_{\infty} \leq \frac{\|M\|_{\infty}}{\tau}\,,$

from the representation (1) and the fact that all summands are positive.

Define the matrix ${\tilde M(x,y) = \sum_{i \notin \Lambda} A_i(x) B_i(y)}$. It follows that

$\displaystyle \|M-\tilde M\|_1 = \mathop{\mathbb E}_{x,y} \left[|M(x,y)-\tilde M(x,y)|\right] = \sum_{i \in \Lambda} \mathop{\mathbb E}_{x,y} [A_i(x) B_i(y)]\,.$

Each of the latter terms is at most ${\|A_i\|_{\infty} \|B_i\|_1 \leq \frac{\|M\|_{\infty}}{\tau}}$ and ${|\Lambda| \leq r}$, thus

$\displaystyle \|M-\tilde M\|_1 \leq r \frac{\|M\|_{\infty}}{\tau}\,.$

Next, observe that

$\displaystyle \mathop{\mathbb E}_y [M(x,y)] = \sum_{i=1}^r A_i(x) \|B_i\|_1 = \sum_{i=1}^r A_i(x)\,,$

implying that ${\|A_i\|_{\infty} \leq \|M\|_{\infty}}$ and thus ${\sum_{i=1}^r \|A_i\|_{\infty} \leq r \|M\|_{\infty}}$.

Setting ${\tau = r \|M\|_{\infty}/\delta}$ yields the statement of the lemma. $\Box$
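
Here is the truncation in code (a synthetic example; the factorization and the planted spiky row are my own choices for illustration):

```python
import numpy as np

# The truncation of Lemma 2: given a non-negative factorization M = AB with
# the rows of B normalized to be densities (E_y B_i = 1, uniform measure on
# Y), drop the "bad" indices with ||B_i||_inf > tau = r ||M||_inf / delta.
# The remaining matrix Mtilde satisfies ||M - Mtilde||_1 <= delta.  Row 0 is
# planted to be spiky (a near-delta density), which forces its A-column small.
rng = np.random.default_rng(8)
X, Y, r, delta = 30, 1000, 8, 0.25

B = 0.5 + rng.random((r, Y))          # benign rows: bounded density values
B[0] = 0.0; B[0, 0] = Y               # spiky row: density with ||B_0||_inf = Y
B /= B.mean(axis=1, keepdims=True)    # normalize: every row has E_y B_i = 1
A = rng.random((X, r)); A[:, 0] = 1e-3
M = A @ B

tau = r * M.max() / delta
keep = B.max(axis=1) <= tau           # complement of the bad set Lambda
Mt = A[:, keep] @ B[keep]

print("dropped rows         :", np.where(~keep)[0])     # the spiky row 0
print("||M - Mtilde||_1     :", np.abs(M - Mt).mean(), "<= delta =", delta)
print("max kept ||B_i||_inf :", B[keep].max(), "<= tau =", tau)
```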

Generally, the ratio ${\frac{\|M\|_{\infty}}{\|M\|_1}}$ will be small compared to ${r}$ (e.g., polynomial in ${n}$ vs. super-polynomial in ${n}$). Thus from now on, we will actually prove a lower bound on ${\alpha_+(M)}$. One has to verify that the proof is robust enough to allow for the level of error inherent in Lemma 2.

1.3. The test functionals

Now we have a convex body of low “analytic non-negative rank” matrices. Returning to Theorem 1 and the matrix ${M_n^g}$, we will now assume that ${\alpha_+(M_n^g) \leq \alpha}$. Next we identify the proper family of test functionals that highlight the difficulty of factoring the matrix ${M_n^g}$. We will consider the uniform measures on ${{[n] \choose m}}$ and ${\{0,1\}^n}$. We use ${\mathop{\mathbb E}_S}$ and ${\mathop{\mathbb E}_x}$ to denote averaging with respect to these measures.

Let ${d=\deg_J(g)-1}$. From the last post, we know there exists a ${d}$-locally positive functional ${\varphi : \{0,1\}^m \rightarrow \mathbb R}$ such that ${\beta := \mathop{\mathbb E}_{y \in \{0,1\}^m} \varphi(y) g(y) < 0}$, and ${\mathop{\mathbb E}_{y \in \{0,1\}^m} \varphi(y) q(y) \geq 0}$ for every non-negative ${d}$-junta ${q : \{0,1\}^m \rightarrow \mathbb R_+}$.

For ${S \subseteq [n]}$ with ${|S|=m}$, let us denote ${\varphi_S(x) = \varphi(x|_S)}$. These functionals prove lower bounds on the junta-degree of ${g}$ restricted to various subsets of the coordinates. If we expect that junta-factorizations are the “best” of a given rank, then we have some confidence in choosing the family ${\{\varphi_S\}}$ as our test functions.

1.4. Entropy maximization

Use ${\alpha_+(M_n^g) \leq \alpha}$ to write

$\displaystyle M_n^g(S,x) = \sum_{i=1}^k A_i(S) B_i(x)\,,$

where ${A_i, B_i \geq 0}$ and we have ${\|B_i\|_1=1}$ and ${\|B_i\|_{\infty} \leq \alpha}$ for all ${i \in [k]}$, and finally ${\sum_{i=1}^k \|A_i\|_{\infty} \leq \alpha}$.

First, as we observed last time, if each ${B_i}$ were a ${d}$-junta, we would have a contradiction:

$\displaystyle \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) M_n^g(S,x)\right] = \mathop{\mathbb E}_{y \in \{0,1\}^m} \varphi(y) g(y) = \beta < 0\,, \ \ \ \ \ (2)$

and yet

$\displaystyle \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) \sum_{i=1}^k A_i(S) B_i(x)\right] = \sum_{i=1}^k \mathop{\mathbb E}_S A_i(S) \mathop{\mathbb E}_x \left[\varphi_S(x) B_i(x)\right] \geq 0 \ \ \ \ \ (3)$

because ${\mathop{\mathbb E}_x \left[\varphi_S(x) B_i(x)\right] = \mathop{\mathbb E}_{y \in \{0,1\}^S} \varphi(y) \mathop{\mathbb E}_{x} \left[B_i(x) \,\big|\, x|_S =y\right] \geq 0}$ since ${\varphi}$ is ${d}$-locally positive and the function ${y \mapsto \mathop{\mathbb E}_{x} \left[B_i(x) \,\big|\, x|_S =y\right]}$ is a ${d}$-junta.

So now the key step: Use entropy maximization to approximate ${B_i}$ by a junta! In future posts, we will need to consider the entire package ${\{B_1, \ldots, B_k\}}$ of functions simultaneously. But for the present lower bound, it suffices to consider each ${B_i}$ separately.

Consider the following optimization over variables ${\{\tilde B_i(x) : x \in \{0,1\}^n\}}$:

$\displaystyle \textrm{minimize} \qquad \mathrm{Ent}(\tilde B_i) = \mathop{\mathbb E} [\tilde B_i \log \tilde B_i]$

$\displaystyle \textrm{subject to} \qquad \mathop{\mathbb E}[\tilde B_i]=1, \quad \tilde B_i(x) \geq 0 \quad \forall x \in \{0,1\}^n\,,$

$\displaystyle \hphantom{\textrm{subject to}} \qquad \mathop{\mathbb E}_x \left[\varphi_S(x) \tilde B_i(x)\right] \leq \mathop{\mathbb E}_x \left[\varphi_S(x) B_i(x)\right] + \varepsilon \qquad \forall\, |S|=m\,. \ \ \ \ \ (4)$

The next claim follows immediately from Theorem 1 in this post (solving the max-entropy optimization by sub-gradient descent).

Claim 1 There exists a function ${\tilde B_i : \{0,1\}^n \rightarrow \mathbb R_+}$ satisfying all the preceding constraints and of the form

$\displaystyle \tilde B_i = \frac{\exp\left(\sum_{j=1}^{M} \lambda_j \varphi_{S_j}\right)}{\mathop{\mathbb E} \exp\left(\sum_{j=1}^{M} \lambda_j \varphi_{S_j}\right)}$

such that

$\displaystyle M \leq C(g) \frac{\log \alpha}{\varepsilon^2}\,,$

where ${C(g)}$ is some constant depending only on ${g}$.

Note that ${\varphi}$ depends only on ${g}$, and thus ${\|\varphi_S\|_{\infty}}$ only depends on ${g}$ as well. Now each ${\varphi_{S_j}}$ only depends on ${m}$ variables (those in ${S_j}$ and ${|S_j|=m}$), meaning that our approximator ${\tilde B_i}$ is an ${h}$-junta for

$\displaystyle h \leq m \cdot C(g) \frac{\log \alpha}{\varepsilon^2}\,. \ \ \ \ \ (5)$

Oops. That doesn’t seem very good. The calculation in (3) needs that ${\tilde B_i}$ is a ${d}$-junta, and certainly ${d < m}$ (since ${g}$ is a function on ${\{0,1\}^m}$). Nevertheless, note that the approximator is a non-trivial junta. For instance, if ${\alpha \leq 2^{\sqrt{n}}}$, then it is an $O(\sqrt{n})$-junta, recalling that $m$ is a constant (depending on ${g}$).

1.5. Random restriction saves the day

Let’s try to apply the logic of (3) to the ${\tilde B_i}$ approximators anyway. Fix some ${i \in [k]}$ and let ${J_i}$ be the set of coordinates on which ${\tilde B_i}$ depends. Then:

$\displaystyle \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) \tilde B_i(x)\right] = \mathop{\mathbb E}_S\,A_i(S) \mathop{\mathbb E}_{y \in \{0,1\}^S} \varphi(y) \mathop{\mathbb E}_{x \in \{0,1\}^n} \left[\tilde B_i(x) \,\big|\, x|_S=y\right]

Note that the map ${y \mapsto \mathop{\mathbb E}_{x \in \{0,1\}^n} \left[\tilde B_i(x) \,\big|\, x|_S=y\right]}$ is a junta on ${J_i \cap S}$. Thus if ${|J_i \cap S| \leq d}$, then the contribution from this term is non-negative since ${\varphi}$ is ${d}$-locally positive. But ${|S|=m}$ is fixed and ${n}$ is growing, thus ${|J_i \cap S| > d}$ is quite rare! Formally,

$\displaystyle \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) \tilde B_i(x)\right] \geq - \|A_i\|_{\infty}\, \mathbb P_S[|J_i \cap S| > d] \geq - \|A_i\|_{\infty} \frac{h^d (2m)^d}{n^d} \,.$

In the last estimate, we have used a simple union bound and ${n \geq 2m}$.
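
One can get a feel for how small this probability is (and how loose the union bound is) with a quick simulation; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

# Random restriction estimate: for a fixed h-set J of coordinates and a
# uniformly random m-subset S of [n], the overlap |J cap S| rarely exceeds d.
# Compare the empirical probability with the bound h^d (2m)^d / n^d.
rng = np.random.default_rng(9)
n, m, h, d, trials = 2000, 10, 60, 3, 200_000

# Without loss of generality, J is the first h coordinates.
hits = 0
for _ in range(trials):
    S = rng.choice(n, size=m, replace=False)
    hits += np.sum(S < h) > d
print("empirical P[|J cap S| > d] :", hits / trials)
print("bound h^d (2m)^d / n^d     :", (h**d) * (2 * m)**d / n**d)
```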

Putting everything together and summing over ${i \in [k]}$, we conclude that

$\displaystyle \sum_{i=1}^k \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) \tilde B_i(x)\right] \geq -\alpha \frac{h^d (2m)^d}{n^d}\,.$

Note that by choosing ${n}$ only moderately large, we will make this error term very small.

Moreover, since each ${\tilde B_i}$ is a feasible point of the optimization (4), we have

$\displaystyle \sum_{i=1}^k \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) B_i(x)\right] = \sum_{i=1}^k \mathop{\mathbb E}_S A_i(S) \mathop{\mathbb E}_x \left[\varphi_S(x) B_i(x)\right]$

$\displaystyle \hphantom{\sum_{i=1}^k \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) B_i(x)\right]} \geq \sum_{i=1}^k \mathop{\mathbb E}_S A_i(S) \left(-\varepsilon + \mathop{\mathbb E}_x \left[\varphi_S(x) \tilde B_i(x)\right]\right)$

$\displaystyle \hphantom{\sum_{i=1}^k \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) B_i(x)\right]} \geq -\varepsilon \sum_{i=1}^k \|A_i\|_1 - \alpha \frac{h^d (2m)^d}{n^d}\,.$

Almost there: Now observe that

$\displaystyle \|g\|_1 = \mathop{\mathbb E}_{S,x} [M_n^g(S,x)] = \sum_{i=1}^k \|A_i\|_1 \|B_i\|_1 = \sum_{i=1}^k \|A_i\|_1\,.$

Plugging this into the preceding line yields

$\displaystyle \sum_{i=1}^k \mathop{\mathbb E}_{S,x} \left[\varphi_S(x) A_i(S) B_i(x)\right] \geq -\varepsilon \|g\|_1 - \alpha \frac{h^d (2m)^d}{n^d}\,.$

Recalling now (2), we have derived a contradiction to ${\alpha_+(M) \leq \alpha}$ if we can choose the right-hand side to be bigger than ${\beta}$ (which is a negative constant depending only on ${g}$). Setting ${\varepsilon = -\beta/(2 \|g\|_1)}$, we consult (5) to see that

$\displaystyle h \leq C'(g) m \log \alpha$

for some other constant ${C'(g)}$ depending only on ${g}$. We thus arrive at a contradiction if ${\alpha = o((n/\log n)^d)}$, recalling that ${m, d}$ depend only on ${g}$. This completes the argument.

# Entropy optimality: Lifts of polytopes

Recall from the previous post that ${\mathrm{CUT}_n}$ denotes the cut polytope on the ${n}$-vertex complete graph, and ${\bar \gamma(\mathrm{CUT}_n)}$ is the smallest number of facets in any polytope that linearly projects to ${\mathrm{CUT}_n}$. Our goal is to prove a lower bound on this quantity, but first we should mention that a nearly tight lower bound is known.

Theorem 1 (Fiorini, Massar, Pokutta, Tiwary, de Wolf 2012) There is a constant ${c > 0}$ such that for every ${n \geq 1}$, ${\bar \gamma(\mathrm{CUT}_n) \geq e^{cn}}$.

We will present a somewhat weaker lower bound using entropy maximization, following our joint works with Chan, Raghavendra, and Steurer (2013) and with Raghavendra and Steurer (2015). This method is only currently capable of proving that ${\bar \gamma(\mathrm{CUT}_n) \geq e^{cn^{1/3}}}$, but it has the advantage of being generalizable—it extends well to the setting of approximate lifts and spectrahedral lifts (we’ll come to the latter in a few posts).

1.1. The entropy maximization framework

To use entropy optimality, we proceed as follows. For the sake of contradiction, we assume that ${\bar \gamma(\mathrm{CUT}_n)}$ is small.

First, we will identify the space of potential lifts of small ${\gamma}$-value with the elements of a convex set of probability measures. (This is where the connection to non-negative matrix factorization (NMF) will come into play.) Then we will choose a family of “tests” intended to capture the difficult aspects of being a valid lift of ${\mathrm{CUT}_n}$. This step is important as having more freedom (corresponding to weaker tests) will allow the entropy maximization to do more “simplification.” The idea is that the family of tests should still be sufficiently powerful to prove a lower bound on the entropy-optimal hypothesis.

Finally, we will maximize the entropy of our lift over all elements of our convex set, subject to performing well on the tests. Our hope is that the resulting lift is simple enough that we can prove it couldn’t possibly pass all the tests, leading to a contradiction.

In order to find the right set of tests, we will identify a family of canonical (approximate) lifts. This is a family ${\{P_k\}}$ of polytopes with ${\gamma(P_k) \leq O(n^k)}$ which we expect to give the “best approximation” to ${\mathrm{CUT}_n}$ among all lifts with similar ${\gamma}$-value. We can identify this family precisely because these will be lifts that obey the natural symmetries of the cut polytope (observe that the symmetric group ${S_n}$ acts on ${\mathrm{CUT}_n}$ in the natural way).

1.2. NMF and positivity certificates

Recall the matrix ${\mathcal M_n : \mathrm{QML}^+_n \times \{0,1\}^n \rightarrow \mathbb R_+}$ given by ${\mathcal M_n(f,x)=f(x)}$, where ${\mathrm{QML}^+_n}$ is the set of all quadratic multi-linear functions that are non-negative on ${\{0,1\}^n}$. In the previous post, we argued that ${\bar \gamma(\mathrm{CUT}_{n+1}) \geq \mathrm{rank}_+(\mathcal M_n)}$.

If ${r=\mathrm{rank}_+(\mathcal M_n)}$, then we can write

$\displaystyle f(x) = \mathcal M_n(f,x) = \sum_{i=1}^r A_i(f) B_i(x) \ \ \ \ \ (1)$

for some functions ${A_i : \mathrm{QML}^+_n \rightarrow \mathbb R_+}$ and ${B_i : \{0,1\}^n \rightarrow \mathbb R_+}$. (Here we are using a factorization ${\mathcal M_n = AB}$ where ${A_{f,i} = A_i(f)}$ and ${B_{x,i}=B_i(x)}$.)

Thus the low-rank factorization gives us a “proof system” for ${\mathrm{QML}^+_n}$. Every ${f \in \mathrm{QML}^+_n}$ can be written as a conic combination of the functions ${B_1, B_2, \ldots, B_r}$, thereby certifying its positivity (since the ${B_i}$‘s are non-negative functions).

If we think about natural families ${\mathcal B = \{B_i\}}$ of “axioms,” then since ${\mathrm{QML}^+_n}$ is invariant under the natural action of ${S_n}$, we might expect that our family ${\mathcal B}$ should share this invariance. Once we entertain this expectation, there are natural small families of axioms to consider: The space of non-negative ${k}$-juntas for some ${k \ll n}$.

A ${k}$-junta ${b : \{0,1\}^n \rightarrow \mathbb R}$ is a function whose value only depends on ${k}$ of its input coordinates. For a subset ${S \subseteq \{1,\ldots,n\}}$ with ${|S|=k}$ and an element ${z \in \{0,1\}^k}$, let ${q_{S,z} : \{0,1\}^n \rightarrow \{0,1\}}$ denote the function given by ${q_{S,z}(x)=1}$ if and only if ${x|_S=z}$.

We let ${\mathcal J_k = \{ q_{S,z} : |S| \leq k, z \in \{0,1\}^{|S|} \}}$. Observe that ${|\mathcal J_k| \leq O(n^k)}$. Let us also define ${\mathrm{cone}(\mathcal J_k)}$ as the set of all conic combinations of functions in ${\mathcal J_k}$. It is easy to see that ${\mathrm{cone}(\mathcal J_k)}$ contains precisely the conic combinations of non-negative ${k}$-juntas.

If it were true that ${\mathrm{QML}^+_n \subseteq \mathrm{cone}(\mathcal J_k)}$ for some ${k}$, we could immediately conclude that ${\mathrm{rank}_+(\mathcal M_n) \leq |\mathcal J_k| \leq O(n^k)}$ by writing ${\mathcal M_n}$ in the form (1), where now ${\{B_i\}}$ ranges over the elements of ${\mathcal J_k}$ and ${\{A_i(f)\}}$ gives the non-negative coefficients witnessing ${f \in \mathrm{cone}(\mathcal J_k)}$.

1.3. No ${k}$-junta factorization for ${k \leq n/2}$

We will now see that juntas cannot provide a small set of axioms for ${\mathrm{QML}^+_n}$.

Theorem 2 Consider the function ${f : \{0,1\}^n \rightarrow \mathbb R_+}$ given by ${f(x) = \left(1-\sum_{i=1}^n x_i\right)^2}$. Then ${f \notin \mathrm{cone}(\mathcal J_{\lfloor n/2\rfloor})}$.

Toward the proof, let’s introduce a few definitions. First, for ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, define the junta degree of ${f}$ to be

$\displaystyle \deg_J(f) = \min \{ k : f \in \mathrm{cone}(\mathcal J_k) \}\,.$

Since every ${f}$ is an ${n}$-junta, we have ${\deg_J(f) \leq n}$.
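For small ${n}$, one can actually compute ${\deg_J(f)}$ by brute force: membership in ${\mathrm{cone}(\mathcal J_k)}$ is a linear feasibility problem in the coefficients of the axioms. Here is a minimal sketch in Python (my own illustration; the helper names are invented):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def junta_axioms(n, k):
    """One column per axiom q_{S,z} with |S| <= k, evaluated on all of {0,1}^n."""
    cube = list(itertools.product([0, 1], repeat=n))
    cols = []
    for j in range(k + 1):
        for S in itertools.combinations(range(n), j):
            for z in itertools.product([0, 1], repeat=j):
                cols.append([1.0 if all(x[i] == zi for i, zi in zip(S, z)) else 0.0
                             for x in cube])
    return np.array(cols).T                     # shape: 2^n x |J_k|

def junta_degree(f_vals, n):
    """Smallest k with f in cone(J_k): find c >= 0 with Q c = f."""
    for k in range(n + 1):
        Q = junta_axioms(n, k)
        res = linprog(np.zeros(Q.shape[1]), A_eq=Q, b_eq=f_vals,
                      bounds=[(0, None)] * Q.shape[1], method="highs")
        if res.success:
            return k
    return n

n = 4
cube = list(itertools.product([0, 1], repeat=n))
f_vals = np.array([(1.0 - sum(x)) ** 2 for x in cube])
print(junta_degree(f_vals, n))   # Theorem 2: this is strictly larger than n/2
```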

Now because ${\{ f : \deg_J(f) \leq k \}}$ is a closed convex cone, there is a universal way of proving that ${\deg_J(f) > k}$. Say that a functional ${\varphi : \{0,1\}^n \rightarrow \mathbb R}$ is ${k}$-locally positive if for all ${|S| \leq k}$ and ${z \in \{0,1\}^{|S|}}$, we have

$\displaystyle \sum_{x \in \{0,1\}^n} \varphi(x) q_{S,z}(x) \geq 0\,.$

These are precisely the linear functionals separating a function ${f}$ from ${\mathrm{cone}(\mathcal J_k)}$: We have ${\mathrm{deg}_J(f) > k}$ if and only if there is a ${k}$-locally positive functional ${\varphi}$ such that ${\sum_{x \in \{0,1\}^n} \varphi(x) f(x) < 0}$. Now we are ready to prove Theorem 2.

Proof: We will prove this using an appropriate ${k}$-locally positive functional. Define

$\displaystyle \varphi(x) = \begin{cases} -1 & |x| = 0 \\ \frac{2}{n} & |x|=1 \\ 0 & |x| > 1\,, \end{cases}$

where ${|x|}$ denotes the Hamming weight of ${x \in \{0,1\}^n}$.

First, observe that

$\displaystyle \sum_{x \in \{0,1\}^n} \varphi(x) = -1 + n \cdot \frac{2}{n} = 1\,.$

Now recall the function ${f}$ from the statement of the theorem. Opening up the square, we have

$\displaystyle \sum_{x \in \{0,1\}^n} \varphi(x) f(x) = \sum_{x \in \{0,1\}^n} \varphi(x) \left(1-2 \sum_i x_i + \sum_i x_i^2 + 2\sum_{i < j} x_i x_j\right)\ \ \ \ \ (2)$

$\displaystyle = \sum_{x \in \{0,1\}^n} \varphi(x) \left(1-\sum_i x_i\right) = -1\,,$

where the second equality uses ${x_i^2=x_i}$ on ${\{0,1\}^n}$ and the fact that ${\varphi}$ is supported on points of Hamming weight at most ${1}$, where the cross terms ${x_i x_j}$ vanish.

Finally, consider some ${S \subseteq \{1,\ldots,n\}}$ with ${|S|=k \leq n/2}$ and ${z \in \{0,1\}^k}$. If ${z=\mathbf{0}}$, then

$\displaystyle \sum_{x \in \{0,1\}^n} \varphi(x) q_{S,z}(x) = -1 + \frac{2}{n} \left(n-k\right) = 1 - \frac{2k}{n} \geq 0\,.$

If ${|z| > 1}$, then the sum is ${0}$, since ${\varphi}$ vanishes on points of Hamming weight larger than ${1}$. If ${|z|=1}$, then the sum is non-negative because ${q_{S,z}}$ is supported only on points where ${\varphi \geq 0}$. We conclude that ${\varphi}$ is ${k}$-locally positive for ${k \leq n/2}$. Combined with (2), this yields the statement of the theorem. $\Box$
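Both properties of ${\varphi}$ are also easy to confirm numerically for small ${n}$; a quick brute-force check (illustration only):

```python
import itertools
import numpy as np

n = 6
cube = list(itertools.product([0, 1], repeat=n))
phi = np.array([-1.0 if sum(x) == 0 else (2.0 / n if sum(x) == 1 else 0.0)
                for x in cube])
f = np.array([(1.0 - sum(x)) ** 2 for x in cube])

print(phi @ f)                    # -1.0, matching (2)

# phi is k-locally positive for every k <= n/2.
ok = True
for j in range(n // 2 + 1):
    for S in itertools.combinations(range(n), j):
        for z in itertools.product([0, 1], repeat=j):
            q = np.array([1.0 if all(x[i] == zi for i, zi in zip(S, z)) else 0.0
                          for x in cube])
            ok = ok and bool(phi @ q >= -1e-12)   # small tolerance for roundoff
print(ok)                         # True
```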

1.4. From juntas to general factorizations

So far we have seen that we cannot achieve a low non-negative rank factorization of ${\mathcal M_n}$ using ${k}$-juntas for ${k \leq n/2}$.

Brief aside: If one translates this back into the setting of lifts, it says that the ${k}$-round Sherali-Adams lift of the polytope

$\displaystyle P = \left\{ x \in [0,1]^{n^2} : x_{ij}=x_{ji},\, x_{ij} \leq x_{jk} + x_{ki} \quad \forall i,j,k \in \{1,\ldots,n\}\right\}$

does not capture ${\mathrm{CUT}_n}$ for ${k \leq n/2}$.

In the next post, we will use entropy maximization to show that a non-negative factorization of ${\mathcal M_n}$ would lead to a ${k}$-junta factorization with ${k}$ small (which we just saw is impossible).

For now, let us state the theorem we will prove. We first define a submatrix of ${\mathcal M_n}$. Fix some integer ${m \geq 1}$ and a function ${g : \{0,1\}^m \rightarrow \mathbb R_+}$. Now define the matrix ${M_n^g : {[n] \choose m} \times \{0,1\}^n \rightarrow \mathbb R_+}$ given by

$\displaystyle M_n^g(S,x) = g(x|_S)\,.$

The matrix is indexed by subsets ${S \subseteq [n]}$ with ${|S|=m}$ and elements ${x \in \{0,1\}^n}$. Here, ${x|_S}$ represents the (ordered) restriction of ${x}$ to the coordinates in ${S}$.

Theorem 3 (Chan-L-Raghavendra-Steurer 2013) For every ${m \geq 1}$ and ${g : \{0,1\}^m \rightarrow \mathbb R_+}$, there is a constant ${C=C(g)}$ such that for all ${n \geq 2m}$,

$\displaystyle \mathrm{rank}_+(M_n^g) \geq C \left(\frac{n}{\log n}\right)^{\mathrm{deg}_J(g)-1}\,.$

Note that if ${g \in \mathrm{QML}^+_m}$ then ${M_n^g}$ is a submatrix of ${\mathcal M_n}$. Since Theorem 2 furnishes a sequence of quadratic multi-linear functions ${\{g_j\}}$ with ${\mathrm{deg}_J(g_j) \rightarrow \infty}$, the preceding theorem tells us that ${\mathrm{rank}_+(\mathcal M_n)}$ cannot be bounded by any polynomial in ${n}$. A more technical version of the theorem is able to achieve a lower bound of ${e^{c n^{1/3}}}$ (see Section 7 here).

# Primer: Lifts of polytopes and non-negative matrix factorization

In preparation for the next post on entropy optimality, we need a little background on lifts of polytopes and non-negative matrix factorization.

1.1. Polytopes and inequalities

A ${d}$-dimensional convex polytope ${P \subseteq \mathbb R^d}$ is the convex hull of a finite set of points in ${\mathbb R^d}$. Equivalently, it is a compact set defined by a family of linear inequalities

$\displaystyle P = \{ x \in \mathbb R^d : Ax \leq b \}$

for some matrix ${A \in \mathbb R^{m \times d}}$.

Let us give a measure of complexity for ${P}$: Define ${\gamma(P)}$ to be the smallest number ${m}$ such that for some ${C \in \mathbb R^{s \times d}, y \in \mathbb R^s, A \in \mathbb R^{m \times d}, b \in \mathbb R^m}$, we have

$\displaystyle P = \{ x \in \mathbb R^d : Cx=y \textrm{ and } Ax \leq b\}\,.$

In other words, this is the minimum number of inequalities needed to describe ${P}$. If ${P}$ is full-dimensional, then this is precisely the number of facets of ${P}$ (a facet is a maximal proper face of ${P}$).

Thinking of ${\gamma(P)}$ as a measure of complexity makes sense from the point of view of optimization: Interior point methods can efficiently optimize linear functions over ${P}$ (to arbitrary accuracy) in time that is polynomial in ${\gamma(P)}$.

1.2. Lifts of polytopes

Many simple polytopes require a large number of inequalities to describe. For instance, the cross-polytope

$\displaystyle C_d = \{ x \in \mathbb R^d : \|x\|_1 \leq 1 \} = \{ x \in \mathbb R^d : \pm x_1 \pm x_2 \cdots \pm x_d \leq 1 \}$

has ${\gamma(C_d)=2^d}$. On the other hand, ${C_d}$ is the projection of the polytope

$\displaystyle Q_d = \left\{ (x,y) \in \mathbb R^{2d} : \sum_{i=1}^d y_i = 1, \,\, - y_i \leq x_i \leq y_i\,\forall i\right\}$

onto the ${x}$ coordinates, and manifestly, ${\gamma(Q_d) \leq 2d}$. Thus ${C_d}$ is the (linear) shadow of a much simpler polytope in a higher dimension.

[image credit: Fiorini, Rothvoss, and Tiwary]
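One can watch the lift at work with a small linear program: the maximum of ${\langle c, x\rangle}$ over ${C_d}$ is ${\|c\|_\infty}$, and the same value is obtained by optimizing over ${Q_d}$, which has only ${O(d)}$ inequalities. A quick sketch (my own; scipy’s `linprog` minimizes, so we negate the objective):

```python
import numpy as np
from scipy.optimize import linprog

d = 5
rng = np.random.default_rng(0)
c = rng.standard_normal(d)

# Variables (x, y) in R^{2d};  -y_i <= x_i <= y_i  and  sum_i y_i = 1.
I = np.eye(d)
A_ub = np.block([[I, -I], [-I, -I]])           # x - y <= 0  and  -x - y <= 0
b_ub = np.zeros(2 * d)
A_eq = np.concatenate([np.zeros(d), np.ones(d)])[None, :]
obj = np.concatenate([-c, np.zeros(d)])        # maximize <c, x>
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * (2 * d), method="highs")
print(-res.fun, np.abs(c).max())               # both equal ||c||_inf
```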

A polytope ${Q}$ is called a lift of the polytope ${P}$ if ${P}$ is the image of ${Q}$ under a linear projection. Again, from an optimization standpoint, lifts are important: If we can optimize linear functionals over ${Q}$, then we can optimize linear functionals over ${P}$. Define now ${\bar \gamma(P)}$ to be the minimal value of ${\gamma(Q)}$ over all lifts ${Q}$ of ${P}$. (The value ${\bar \gamma(P)}$ is sometimes called the (linear) extension complexity of ${P}$.)

1.3. The permutahedron

Here is a somewhat more interesting family of examples where lifts reduce complexity. The permutahedron ${\Pi_n \subseteq \mathbb R^n}$ is the convex hull of the vectors ${(i_1, i_2, \ldots, i_n)}$ where ${\{i_1, \ldots, i_n\} = \{1,\ldots,n \}}$. It is known that ${\gamma(\Pi_n)=2^n-2}$.

Let ${B_n \subseteq \mathbb R^{n^2}}$ denote the convex hull of the ${n \times n}$ permutation matrices. The Birkhoff-von Neumann theorem tells us that ${B_n}$ is precisely the set of doubly stochastic matrices, thus ${\gamma(B_n)=n^2}$ (corresponding to the non-negativity constraints on each entry).

Observe that ${\Pi_n}$ is the linear image of ${B_n}$ under the map ${A \mapsto A (1, 2, \ldots, n)^T}$, i.e. we multiply a matrix ${A \in B_n}$ on the right by the column vector ${(1, 2, \ldots, n)}$. Thus ${B_n}$ is a lift of ${\Pi_n}$, and we conclude that ${\bar \gamma(\Pi_n) \leq n^2 \ll \gamma(\Pi_n)}$.
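Here is a tiny numerical confirmation of the projection (my own illustration): the ${n!}$ permutation matrices, the vertices of ${B_n}$, map exactly onto the vertices of ${\Pi_n}$.

```python
import itertools
import numpy as np

n = 4
v = np.arange(1, n + 1)
images = set()
for perm in itertools.permutations(range(n)):
    P = np.eye(n)[list(perm)]                      # the permutation matrix of perm
    images.add(tuple(int(t) for t in P @ v))       # its image A (1,...,n)^T
print(sorted(images) == sorted(itertools.permutations(range(1, n + 1))))  # True
```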

1.4. The cut polytope

If ${P \neq NP}$, there are certain combinatorial polytopes we should not be able to optimize over efficiently. A central example is the cut polytope: ${\mathrm{CUT}_n \subseteq \mathbb R^{n^2}}$ is the convex hull of all matrices of the form ${(A_S)_{ij} = |\mathbf{1}_S(i)-\mathbf{1}_S(j)|}$ for some subset ${S \subseteq \{1,\ldots,n\}}$. Here, ${\mathbf{1}_S}$ denotes the characteristic function of ${S}$.

Note that the MAX-CUT problem on a graph ${G=(V,E)}$ can be encoded in the following way: Let ${W_{ij} = W_{ji} = \tfrac12}$ if ${\{i,j\} \in E}$ and ${W_{ij}=0}$ otherwise (the weight ${\tfrac12}$ accounts for each edge appearing twice in the symmetric matrix). Then the value of the maximum cut in ${G}$ is precisely the maximum of ${\langle W, A\rangle}$ for ${A \in \mathrm{CUT}_n}$. Accordingly, we should expect that ${\bar \gamma(\mathrm{CUT}_n)}$ cannot be bounded by any polynomial in ${n}$ (lest we violate a basic tenet of complexity theory).
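As a sanity check of the encoding (purely illustrative), one can brute-force a small instance by enumerating the vertices ${A_S}$:

```python
import numpy as np

# A 5-cycle; its maximum cut has 4 edges.
n = 5
W = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    W[i, j] = W[j, i] = 0.5        # weight 1/2 on both (i,j) and (j,i)

best = 0.0
for mask in range(2 ** n):         # enumerate all subsets S
    ind = np.array([(mask >> i) & 1 for i in range(n)], dtype=float)
    A_S = np.abs(ind[:, None] - ind[None, :])    # (A_S)_{ij} = |1_S(i) - 1_S(j)|
    best = max(best, float(np.sum(W * A_S)))     # <W, A_S> = number of cut edges
print(best)                        # 4.0
```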

1.5. Non-negative matrix factorization

The key to understanding ${\bar \gamma(\mathrm{CUT}_n)}$ comes from Yannakakis’ factorization theorem.

Consider a polytope ${P \subseteq \mathbb R^d}$ and let us write it in two ways: As a convex hull of vertices

$\displaystyle P = \mathrm{conv}\left(x_1, x_2, \ldots, x_n\right)\,,$

and as an intersection of half-spaces: For some ${A \in \mathbb R^{m \times d}}$ and ${b \in \mathbb R^m}$,

$\displaystyle P = \left\{ x \in \mathbb R^d : Ax \leq b \right\}\,.$

Given this pair of representations, we can define the corresponding slack matrix of ${P}$ by

$\displaystyle S_{ij} = b_i - \langle A_i, x_j \rangle \qquad i \in \{1,2,\ldots,m\}, j \in \{1,2,\ldots,n\}\,.$

Here, ${A_1, \ldots, A_m}$ denote the rows of ${A}$.

We need one more definition. In what follows, we will use ${\mathbb R_+ = [0,\infty)}$. If we have a non-negative matrix ${M \in \mathbb R_+^{m \times n}}$, then a rank-${r}$ non-negative factorization of ${M}$ is a factorization ${M = AB}$ where ${A \in \mathbb R_+^{m \times r}}$ and ${B \in \mathbb R_+^{r \times n}}$. We then define the non-negative rank of ${M}$, written ${\mathrm{rank}_+(M)}$, to be the smallest ${r}$ such that ${M}$ admits a rank-${r}$ non-negative factorization.
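Computing ${\mathrm{rank}_+(M)}$ exactly is NP-hard in general, but one can heuristically search for rank-${r}$ non-negative factorizations, which (approximately) certifies upper bounds. A minimal sketch using the standard Lee–Seung multiplicative updates (a generic NMF heuristic, not anything from the posts under discussion):

```python
import numpy as np

def nmf(M, r, iters=2000, eps=1e-12):
    """Heuristic non-negative factorization M ~ A @ B with inner dimension r."""
    rng = np.random.default_rng(0)
    m, n = M.shape
    A, B = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):         # Lee-Seung multiplicative updates
        B *= (A.T @ M) / (A.T @ A @ B + eps)
        A *= (M @ B.T) / (A @ B @ B.T + eps)
    return A, B

# A 4x4 non-negative matrix built from two non-negative rank-one pieces.
M = np.outer([1, 2, 3, 4], [1, 1, 2, 2]) + np.outer([4, 3, 2, 1], [2, 1, 1, 2.0])
A, B = nmf(M, r=2)
print(np.max(np.abs(M - A @ B)))   # small residual, witnessing rank_+(M) <= 2
```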

Theorem 1 (Yannakakis) For every polytope ${P}$, it holds that ${\bar \gamma(P) = \mathrm{rank}_+(S)}$ for any slack matrix ${S}$ of ${P}$.

The key fact underlying this theorem is Farkas’ Lemma. It asserts that if ${P = \{ x \in \mathbb R^d : Ax \leq b \}}$, then every valid linear inequality over ${P}$ can be written as a non-negative combination of the defining inequalities ${\langle A_i, x \rangle \leq b_i}$.

There is an interesting connection here to proof systems. The theorem says that we can interpret ${\bar \gamma(P)}$ as the minimum number of axioms so that every valid linear inequality for ${P}$ can be proved using a conic (i.e., non-negative) combination of the axioms.

1.6. Slack matrices and the correlation polytope

Thus to prove a lower bound on ${\bar \gamma(\mathrm{CUT}_n)}$, it suffices to find a valid set of linear inequalities for ${\mathrm{CUT}_n}$ and prove a lower bound on the non-negative rank of the corresponding slack matrix.

Toward this end, consider the correlation polytope ${\mathrm{CORR}_n \subseteq \mathbb R^{n^2}}$ given by

$\displaystyle \mathrm{CORR}_n = \mathrm{conv}\left(\left\{x x^T : x \in \{0,1\}^n \right\}\right)\,.$

It is an exercise to see that ${\mathrm{CUT}_{n+1}}$ and ${\mathrm{CORR}_n}$ are linearly isomorphic.

Now we identify a particularly interesting family of valid linear inequalities for ${\mathrm{CORR}_n}$. (In fact, it turns out that this will also be an exhaustive list.) A quadratic multi-linear function on ${\mathbb R^n}$ is a function ${f : \mathbb R^n \rightarrow \mathbb R}$ of the form

$\displaystyle f(x) = a_0 + \sum_i a_{ii} x_i + \sum_{i < j} a_{ij} x_i x_j\,,$

for some real numbers ${a_0}$ and ${\{a_{ij}\}}$.

Suppose ${f}$ is a quadratic multi-linear function that is also non-negative on ${\{0,1\}^n}$. Then “${f(x) \geq 0 \,\,\forall x \in \{0,1\}^n}$” can be encoded as a valid linear inequality on ${\mathrm{CORR}_n}$. To see this, write ${f(x)=\langle A,xx^T\rangle + a_0}$, where ${A}$ is the upper-triangular matrix with entries ${(a_{ij})_{i \leq j}}$; this uses ${x_i^2 = x_i}$ on ${\{0,1\}^n}$, so the diagonal of ${A}$ carries the linear terms. (Note that ${\langle \cdot,\cdot\rangle}$ is intended to be the standard inner product on ${\mathbb R^{n^2}}$.)
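For concreteness, here is a small check of this encoding (illustration only): with ${A}$ upper triangular, ${\langle A, xx^T\rangle + a_0}$ reproduces ${f}$ on the cube because ${x_i^2 = x_i}$.

```python
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(1)
a0 = rng.standard_normal()
A = np.triu(rng.standard_normal((n, n)))    # a_{ii} on the diagonal, a_{ij} above it

def f(x):
    return a0 + sum(A[i, i] * x[i] for i in range(n)) \
              + sum(A[i, j] * x[i] * x[j]
                    for i in range(n) for j in range(i + 1, n))

for x in itertools.product([0, 1], repeat=n):
    xv = np.array(x, dtype=float)
    assert abs(f(x) - (np.sum(A * np.outer(xv, xv)) + a0)) < 1e-9
print("f agrees with <A, x x^T> + a_0 on {0,1}^n")
```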

Now let ${\textrm{QML}^+_n}$ be the set of all quadratic multi-linear functions that are non-negative on ${\{0,1\}^n}$, and consider the matrix (represented here as a function) ${\mathcal M_n : \textrm{QML}^+_n \times \{0,1\}^n \rightarrow \mathbb R_+}$ given by

$\displaystyle \mathcal M_n(f,x) = f(x)\,.$

Then from the above discussion, ${\mathcal M_n}$ is a sub-matrix of some slack matrix of ${\mathrm{CORR}_n}$. To summarize, we have the following theorem.

Theorem 2 For all ${n \geq 1}$, it holds that ${\bar \gamma(\mathrm{CUT}_{n+1}) \geq \mathrm{rank}_+(\mathcal M_n)}$.

It is actually the case that ${\bar \gamma(\mathrm{CUT}_{n+1}) = \mathrm{rank}_+(\mathcal M_n)}$. The next post will focus on providing a lower bound on ${\mathrm{rank}_+(\mathcal M_n)}$.

# Entropy optimality: Forster’s isotropy

In this post, we will see an example of entropy optimality applied to a determinantal measure (see, for instance, Terry Tao’s post on determinantal processes and Russ Lyons’ ICM survey). I think this is an especially fertile setting for entropy maximization, but this will be the only post in this vein for now; I hope to return to the topic later.

Our goal is to prove the following theorem of Forster.

Theorem 1 (Forster) Suppose that ${x_1, x_2, \ldots, x_n \in \mathbb R^k}$ are unit vectors such that every subset of ${k}$ vectors is linearly independent. Then there exists a linear mapping ${A : \mathbb R^k \rightarrow \mathbb R^k}$ such that

$\displaystyle \sum_{i=1}^n \frac{(A x_i) (A x_i)^T}{\|A x_i\|^2} = \frac{n}{k} I\,. \ \ \ \ \ (1)$

This result is surprising at first glance. If we simply wanted to map the vectors ${\{x_1, \ldots, x_n\}}$ to isotropic position, we could use the matrix ${A = (\sum_{i=1}^n x_i x_i^T)^{-1/2}}$. But Forster’s theorem asks that the unit vectors

$\displaystyle \left\{ \frac{Ax_1}{\|A x_1\|}, \ldots, \frac{A x_n}{\|A x_n\|} \right\}$

are in isotropic position. This seems to be a much trickier task.

Forster used this as a step in proving lower bounds on the sign rank of certain matrices. Forster’s proof is based on an iterative argument combined with a compactness assertion.

There is another approach based on convex programming arising in the work of Barthe on a reverse Brascamp-Lieb inequality. The relation to Forster’s theorem was observed in work of Hardt and Moitra; it is essentially the dual program to the one we construct below.

We first recall a few preliminary facts about determinants. For any ${x_1, \ldots, x_n \in \mathbb R^k}$, we have the Cauchy-Binet formula

$\displaystyle \det\left(\sum_{i=1}^n x_i x_i^T\right) = \sum_{|S|=k} \det\left(\sum_{i \in S} x_i x_i^T\right)\,.$

We also have a rank-one update formula for the determinant: If a matrix ${A}$ is invertible, then

$\displaystyle \det\left(A+u u^T\right) = \det(A) \left(1+u^T A^{-1} u\right)\,.$

Finally, for ${k}$ vectors ${x_1, x_2, \ldots, x_k \in \mathbb R^k}$ and nonnegative coefficients ${c_1, c_2, \ldots, c_k \geq 0}$, we have

$\displaystyle \det\left(\sum_{i=1}^k c_i \,x_i x_i^T\right) = \left(\prod_{i=1}^k c_i\right) \det\left(\sum_{i=1}^k x_i x_i^T\right)\,.$

This follows by writing ${\sum_{i=1}^k c_i \,x_i x_i^T = X C X^T}$, where ${X}$ is the matrix with columns ${x_1, \ldots, x_k}$ and ${C = \mathrm{diag}(c_1, \ldots, c_k)}$, so that the determinant factors as ${\det(X)^2 \prod_{i=1}^k c_i}$.
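All three facts are easy to sanity-check numerically; a quick sketch (my own illustration):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 6
X = rng.standard_normal((n, k))       # rows play the role of x_1, ..., x_n

def gram(rows):
    return rows.T @ rows              # = sum of x_i x_i^T over the given rows

# Cauchy-Binet.
lhs = np.linalg.det(gram(X))
rhs = sum(np.linalg.det(gram(X[list(S)]))
          for S in itertools.combinations(range(n), k))
print(np.isclose(lhs, rhs))           # True

# Rank-one update.
A, u = gram(X), rng.standard_normal(k)
print(np.isclose(np.linalg.det(A + np.outer(u, u)),
                 np.linalg.det(A) * (1 + u @ np.linalg.solve(A, u))))   # True

# Scaling by non-negative coefficients (k vectors).
c, Y = rng.random(k), X[:k]
print(np.isclose(np.linalg.det(Y.T @ (c[:, None] * Y)),
                 np.prod(c) * np.linalg.det(gram(Y))))                  # True
```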

1.2. A determinantal measure

To prove Theorem 1, we will first define a probability measure on ${{[n] \choose k}}$, i.e., on the ${k}$-subsets of ${\{1,2,\ldots,n\}}$ by setting:

$\displaystyle D_S = \frac{\det\left(\sum_{i \in S} x_i x_i^T\right)}{\det\left(\sum_{i=1}^n x_i x_i^T\right)}\,.$

The Cauchy-Binet formula is precisely the statement that ${\sum_{|S|=k} D_S=1}$, i.e. the collection ${\{D_S\}}$ forms a probability distribution on ${{[n] \choose k}}$. How can we capture the fact that some vectors ${x_1, \ldots, x_n}$ satisfy ${\sum_{i=1}^n x_i x_i^T = \frac{n}{k} I}$ using only the values ${D_S}$?

Using the rank-one update formula, for an invertible ${k \times k}$ matrix ${B}$, we have ${\frac{\partial}{\partial t}\big|_{t=0} \log \det(B+t u u^T) = \langle u, B^{-1} u\rangle}$. Thus ${B}$ is the ${k \times k}$ identity matrix if and only if for every ${u \in \mathbb R^k}$,

$\displaystyle \frac{\partial}{\partial t}\Big|_{t=0} \log \det(B+t\,uu^T)=\|u\|^2\,.$

Note also that using Cauchy-Binet,

$\displaystyle \frac{\partial}{\partial t}\Big|_{t=0} \log \det\left(\sum_{j=1}^n x_j x_j^T + t\, x_i x_i^T\right)$

$\displaystyle = \frac{\sum_{S : i \in S} \frac{\partial}{\partial t}\big|_{t=0} \det\left(\sum_{j \in S} x_j x_j^T + t x_i x_i^T\right)} {\det\left(\sum_{i=1}^n x_i x_i^T\right)}$

$\displaystyle = \frac{\sum_{S : i \in S} \frac{\partial}{\partial t}\big|_{t=0} (1+t)\det\left(\sum_{j \in S} x_j x_j^T\right)} {\det\left(\sum_{i=1}^n x_i x_i^T\right)} = \sum_{S : i \in S} D_S\,.$

In particular, if ${\sum_{i=1}^n x_i x_i^T = \frac{n}{k} I}$, then for every ${i=1,2,\ldots,n}$, we have

$\displaystyle \frac{k}{n} = \frac{\partial}{\partial t}\Big|_{t=0} \log \det\left(\sum_{j=1}^n x_j x_j^T +t\, x_i x_i^T\right) = \sum_{S : i \in S} D_S\,. \ \ \ \ \ (2)$

Of course, our vectors ${x_1, \ldots, x_n}$ likely don’t satisfy this condition (otherwise we would be done). So we will use the max-entropy philosophy to find the “simplest” perturbation of the ${D_S}$ values that does satisfy it. The optimal solution will yield a matrix ${A}$ satisfying (1).

1.3. Entropy maximization

Consider the following convex program with variables ${\{p_S : S \subseteq [n], |S|=k\}}$.

$\displaystyle \textrm{minimize} \qquad \sum_{|S|=k} p_S \log \frac{p_S}{D_S}$

$\displaystyle \textrm{subject to} \qquad \sum_{S : i \in S} p_S = \frac{k}{n} \qquad \forall i=1,2,\ldots,n$

$\displaystyle \sum_{|S|=k} p_S = 1$

$\displaystyle p_S \geq 0 \qquad \forall |S|=k\,.$

In other words, we look for a distribution on ${[n] \choose k}$ that minimizes the relative entropy with respect to ${\{D_S\}}$, subject to all the “one-dimensional marginals” being equal (recall (2)). Remarkably, the optimum ${p^*_S}$ will be a determinantal measure as well.

Note that the uniform distribution on subsets of size ${k}$ is a feasible point and the objective is finite precisely because ${D_S \neq 0}$ for every ${S}$. The latter fact follows from our assumption that every subset of ${k}$ vectors is linearly independent.

1.4. Analyzing the optimizer

By setting the gradient of the Lagrangian to zero, we see that the optimal solution has the form

$\displaystyle p_S(\lambda_1, \ldots, \lambda_n) = \frac{\exp\left(\sum_{i \in S} \lambda_i \right) D_S}{\sum_{|S'|=k} D_{S'} \exp\left(\sum_{i \in S'} \lambda_i\right)} = \frac{\det\left(\sum_{i \in S} e^{\lambda_i} x_i x_i^T\right)}{\det\left(\sum_{i=1}^n e^{\lambda_i} x_i x_i^T\right)} \ \ \ \ \ (3)$

for some dual variables ${(\lambda_1, \ldots, \lambda_n)}$; the second equality rewrites the numerator using the scaling formula and the denominator using Cauchy-Binet. Note that the ${\{\lambda_i\}}$ dual variables are unconstrained because they come from equality constraints.

Let us write ${U = \sum_{i=1}^n e^{\lambda_i} x_i x_i^T}$. We use ${p_S^*}$, ${U_*}$, and ${\{\lambda_i^*\}}$ to denote the values at the optimal solution. Using again the rank-one update formula for the determinant,

$\displaystyle \frac{\partial}{\partial \lambda_i} \log \det(U) = e^{\lambda_i} \langle x_i, U^{-1} x_i\rangle\,.$

But just as in (2), we can also use Cauchy-Binet to calculate the derivative (from the second expression in (3)):

$\displaystyle \frac{\partial}{\partial \lambda_i} \log \det(U) = \sum_{S : i \in S} p_S\,,$

where we have used that, by the scaling formula, the term of the Cauchy-Binet expansion corresponding to ${S}$ equals ${\left(\prod_{j \in S} e^{\lambda_j}\right) \det\left(\sum_{j \in S} x_j x_j^T\right)}$, whose derivative in ${\lambda_i}$ is itself if ${i \in S}$ (and ${0}$ otherwise). We conclude that

$\displaystyle \langle x_i, U_*^{-1} x_i\rangle = e^{-\lambda^*_i} \sum_{S : i \in S} p^*_S = e^{-\lambda^*_i} \frac{k}{n}\,.$

Now we can finish the proof: Let ${A = U_*^{-1/2}}$. Then:

$\displaystyle \sum_{i=1}^n \frac{(A x_i) (A x_i)^T}{\|A x_i\|^2} = U_*^{-1/2} \sum_{i=1}^n \frac{x_i x_i^T}{\langle x_i, U_*^{-1} x_i\rangle} U_*^{-1/2}$

$\displaystyle = U_*^{-1/2} \sum_{i=1}^n \frac{n}{k} e^{\lambda_i^*} x_i x_i^T U_*^{-1/2} = \frac{n}{k} I\,.$
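The entire argument can be carried out numerically. One way to do it (my own repackaging of the optimality conditions above, not something spelled out in the sources) is to minimize the convex function ${F(\lambda) = \log\det\left(\sum_{i=1}^n e^{\lambda_i} x_i x_i^T\right) - \frac{k}{n}\sum_{i=1}^n \lambda_i}$, whose gradient is ${e^{\lambda_i}\langle x_i, U^{-1}x_i\rangle - \frac{k}{n}}$ by the computation above, and then set ${A = U_*^{-1/2}}$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k, n = 3, 7
X = rng.standard_normal((n, k))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit vectors x_1, ..., x_n
# (Random Gaussian directions satisfy the general-position hypothesis a.s.)

def U_of(lam):
    return X.T @ (np.exp(lam)[:, None] * X)     # sum_i e^{lam_i} x_i x_i^T

def F(lam):                                     # convex dual objective
    return np.linalg.slogdet(U_of(lam))[1] - (k / n) * lam.sum()

def gradF(lam):
    Uinv = np.linalg.inv(U_of(lam))
    return np.exp(lam) * np.einsum('ij,jk,ik->i', X, Uinv, X) - k / n

lam = minimize(F, np.zeros(n), jac=gradF, method='BFGS').x
w, V = np.linalg.eigh(U_of(lam))
A = V @ np.diag(w ** -0.5) @ V.T                # A = U_*^{-1/2}

M = sum(np.outer(y, y) / (y @ y) for y in (A @ X.T).T)
print(np.round(M, 6))                           # ~ (n/k) * I
```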

# Entropy optimality: Bloom’s Chang’s Lemma

[Added in post: One might consult this post for a simpler/more general (and slightly more correct) version of the results presented here.]

In our last post regarding Chang’s Lemma, let us visit a version due to Thomas Bloom. We will offer a new proof using entropy maximization. In particular, we will again use only boundedness of the Fourier characters.

There are two new (and elementary) techniques here: (1) using a trade-off between entropy maximization and accuracy and (2) truncating the Taylor expansion of ${e^x}$.

We use the notation from our previous post: ${G=\mathbb F_p^n}$ for some prime ${p}$ and ${\mu}$ is the uniform measure on ${G}$. For ${\eta > 0}$ and ${f \in L^2(G)}$, we define ${\Delta_{\eta}(f) = \{ \alpha \in G : |\hat f(\alpha)| \geq \eta \|f\|_1 \}}$. We also use ${Q_{\mu} \subseteq L^2(G)}$ to denote the set of all densities with respect to ${\mu}$.

Theorem 1 (Bloom) There is a constant ${c > 0}$ such that for every ${\eta > 0}$ and every density ${f \in Q_{\mu}}$, there is a subset ${\Delta \subseteq \Delta_{\eta}(f) \subseteq G}$ such that ${|\Delta| \geq c \eta |\Delta_{\eta}(f)|}$ and ${\Delta}$ is contained in some ${\mathbb F_p}$ subspace of dimension at most

$\displaystyle \frac{1+\mathrm{Ent}_{\mu}(f)}{c\eta}\,.$

Note that we only bound the dimension of a subset of the large spectrum, but the bound on the dimension improves by a factor of ${1/\eta}$. Bloom uses this as the key step in his proof of what (at the time of writing) constitutes the best asymptotic bounds in Roth’s theorem on three-term arithmetic progressions:

Theorem 2 If a subset ${A \subseteq \{1,2,\ldots,N\}}$ contains no non-trivial three-term arithmetic progressions, then

$\displaystyle |A| \leq O(1) \frac{\left(\log \log N\right)^4}{\log N} N\,.$

This represents a modest improvement over the breakthrough of Sanders achieving ${\frac{(\log \log N)^{O(1)}}{\log N} N}$, but the proof is somewhat different.

1.1. A stronger version

In fact, we will prove a stronger theorem.

Theorem 3 For every ${\eta > 0}$ and every density ${f \in Q_{\mu}}$, there is a random subset ${\Delta \subseteq G}$ such that almost surely

$\displaystyle \dim_{\mathbb F_p}(\mathrm{span}\, \Delta) \leq 12 \frac{\mathrm{Ent}_{\mu}(f)}{\eta} + O(\log (1/\eta))\,,$

and for every ${\alpha \in \Delta_{\eta}(f)}$, it holds that

$\displaystyle \mathbb P[\alpha \in \Delta] \geq \frac{\eta}{4}\,.$

This clearly yields Theorem 1 by averaging.

1.2. The same polytope

To prove Theorem 3, we use the same polytope we saw before. Recall the class of test functionals ${\mathcal F = \{ \pm \mathrm{Re}\,\chi_\alpha, \pm \mathrm{Im}\,\chi_\alpha : \alpha \in G\} \subseteq L^2(G)}$.

We defined ${P(f,\eta) \subseteq L^2(G)}$ by

$\displaystyle P(f,\eta) = \left\{ g \in L^2(G) : \langle g, \varphi \rangle \geq \langle f,\varphi\rangle - \eta\quad\forall \varphi \in \mathcal F\right\}\,.$

Let us consider a slightly different convex optimization:

$\displaystyle \textrm{minimize } \mathrm{Ent}_{\mu}(g)+K \mathbf{\varepsilon} \qquad \textrm{ subject to } g \in P(f,\mathbf{\varepsilon}) \cap Q_{\mu}\,. \ \ \ \ \ (1)$

Here, ${K}$ is a constant that we will set soon. On the other hand, ${\varepsilon \geq 0}$ is now intended as an additional variable over which to optimize. We allow the optimization to trade off the entropy term and the accuracy ${\varepsilon}$. The constant ${K > 0}$ represents how much we value one vs. the other.

Notice that, since ${f \in P(f,0) \cap Q_{\mu}}$, this convex program satisfies Slater’s condition (there is a feasible point in the relative interior), meaning that strong duality holds (see Section 5.2.3).

1.3. The optimal solution

As in our first post on this topic, we can set the gradient of the Lagrangian equal to zero to obtain the form of the optimal solution: For some dual variables ${\{\lambda^*_{\varphi} \geq 0: \varphi \in \mathcal F\} \subseteq \mathbb R}$,

$\displaystyle g^*(x) = \frac{\exp\left(\sum_{\varphi \in \mathcal F} \lambda^*_{\varphi} \varphi(x)\right)}{\mathop{\mathbb E}_{\mu} \left[\exp\left(\sum_{\varphi \in \mathcal F} \lambda^*_{\varphi} \varphi\right)\right]} \ \ \ \ \ (2)$

Furthermore, corresponding to our new variable ${\varepsilon}$, there is a new constraint on the dual variables:

$\displaystyle \sum_{\varphi \in \mathcal F} \lambda^*_{\varphi} \leq K\,.$

Observe now that if we put ${K = 2 \frac{\mathrm{Ent}_{\mu}(f)}{\eta}}$ then we can bound ${\varepsilon^*}$ (the error in the optimal solution): Since ${f}$ is a feasible solution with ${\varepsilon=0}$, we have

$\displaystyle \mathrm{Ent}_{\mu}(g^*) + K \varepsilon^* \leq \mathrm{Ent}_{\mu}(f)\,,$

which implies that ${\varepsilon^* \leq \frac{\mathrm{Ent}_{\mu}(f)}{K} = \frac{\eta}{2}}$ since ${\mathrm{Ent}_{\mu}(g^*) \geq 0}$.

To summarize: By setting ${K}$ appropriately, we obtain ${g^* \in P(f,\eta/2) \cap Q_{\mu}}$ of the form (2) and such that

$\displaystyle \sum_{\varphi \in \mathcal F} \lambda_{\varphi}^* \leq 2\frac{\mathrm{Ent}_{\mu}(f)}{\eta}\,. \ \ \ \ \ (3)$

Note that one can arrive at the same conclusion using the algorithm from our previous post: The version unconcerned with sparsity finds a feasible point after time ${T \leq \frac{\mathrm{Ent}_{\mu}(f)}{\varepsilon}}$. Setting ${\varepsilon = \eta/2}$ yields the same result without using duality.

1.4. A Taylor expansion

Let us slightly rewrite ${g^*}$ by multiplying the numerator and denominator by ${\exp\left({\sum_{\varphi \in \mathcal F} \lambda^*_{\varphi}}\right)}$. This yields:

$\displaystyle g^* = \frac{\exp\left(\sum_{\varphi \in \mathcal F} \lambda^*_{\varphi} (1+\varphi)\right)}{\mathop{\mathbb E}_{\mu} \exp\left(\sum_{\varphi \in \mathcal F} \lambda^*_{\varphi} (1+\varphi)\right)}$

The point of this transformation is that now the exponent is a sum of non-negative terms (since ${\|\varphi\|_{\infty} \leq 1}$ implies ${1+\varphi \geq 0}$), and furthermore by (3), the exponent is always bounded by

$\displaystyle B \stackrel{\mathrm{def}}{=} 4 \frac{\mathrm{Ent}_{\mu}(f)}{\eta}\,. \ \ \ \ \ (4)$

Let us now Taylor expand ${e^x = \sum_{j \geq 0} \frac{x^j}{j!}}$. Applying this to the numerator, we arrive at an expression

$\displaystyle g^* = \sum_{\vec \alpha} y_{\vec \alpha} T_{\vec \alpha}$

where ${y_{\vec \alpha} \geq 0}$, ${\sum_{\vec \alpha} y_{\vec \alpha} = 1}$, and each ${T_{\vec \alpha} \in Q_{\mu}}$ is a density. Here, ${\vec \alpha}$ ranges over all finite sequences of elements from ${\mathcal F}$ and

$\displaystyle T_{\vec \alpha} = \frac{\prod_{i=1}^{|\vec \alpha|} (1+\varphi_{\vec \alpha_i})}{\mathop{\mathbb E}_{\mu} \prod_{i=1}^{|\vec \alpha|} (1+\varphi_{\vec \alpha_i})}=\prod_{i=1}^{|\vec \alpha|} (1+\varphi_{\vec \alpha_i})\,,$

where we use ${|\vec \alpha|}$ to denote the length of the sequence ${\vec \alpha}$.

1.5. The random subset

We now define a random function ${\mathbf{T} \in L^2(G)}$ by taking ${\mathbf{T}=T_{\vec \alpha}}$ with probability ${y_{\vec \alpha}}$.

Consider some ${\gamma \in \Delta_{\eta}(f)}$. Since ${g^* \in P(f,\eta/2)}$, we know that ${\gamma \in \Delta_{\eta/2}(g^*)}$. Thus

$\displaystyle \frac{\eta}{2} < |\langle g^*, \chi_{\gamma}\rangle| \leq \sum_{\vec \alpha} y_{\vec \alpha} |\langle T_{\vec \alpha}, \chi_{\gamma}\rangle| = \mathop{\mathbb E}\,|\langle \mathbf{T},\chi_{\gamma}\rangle|\,.$

But we also have ${|\langle T_{\vec \alpha}, \chi_{\gamma}\rangle| \leq \|T_{\vec \alpha}\|_1 \cdot \|\chi_{\gamma}\|_{\infty} \leq 1}$. This implies that ${\mathbb P[|\langle \mathbf{T}, \chi_{\gamma}\rangle| > 0] > \eta/2}$.

Equivalently, for any ${\gamma \in \Delta_{\eta}(f)}$, it holds that ${\mathbb P[\gamma \in \Delta_0(\mathbf{T})] > \eta/2}$. We would be done with the proof of Theorem 3 if we also knew that ${\mathbf{T}}$ were supported on functions ${T_{\vec \alpha}}$ for which ${|\vec \alpha| \leq O(B)}$ because ${\dim_{\mathbb F_p}(\mathrm{span}(\Delta_0(T_{\vec \alpha}))) \leq |\vec \alpha|}$. This is not necessarily true, but we can simply truncate the Taylor expansion to ensure it.

1.6. Truncation

Let ${p_k(x) = \sum_{j \leq k} \frac{x^j}{j!}}$ denote the Taylor expansion of ${e^x}$ to degree ${k}$. Since the exponent in ${g^*}$ is always bounded by ${B}$ (recall (4)), we have

$\displaystyle \sum_{|\vec \alpha| > k} y_{\vec \alpha} \leq \sup_{x \in [0,B]} \frac{|e^x - p_k(x)|}{e^x} \leq \frac{B^{k+1}}{(k+1)!}\,.$

By standard estimates, we can choose ${k \leq 3 B + O(\log(1/\eta))}$ to make the latter quantity at most ${\eta/4}$.
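Concretely, the required truncation degree can be computed with a two-line loop (illustration):

```python
import math

def truncation_degree(B, eta):
    """Smallest k with B^(k+1)/(k+1)! <= eta/4."""
    k = 0
    while B ** (k + 1) / math.factorial(k + 1) > eta / 4:
        k += 1
    return k

for B in [1.0, 5.0, 20.0]:
    print(B, truncation_degree(B, eta=0.1))     # grows like 3B + O(log(1/eta))
```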

Since ${\sum_{|\vec \alpha| > k} y_{\vec \alpha} \leq \eta/4}$, a union bound combined with our previous argument immediately implies that for ${\gamma \in \Delta_{\eta}(f)}$, we have

$\displaystyle \mathbb P\left[|\langle \mathbf{T}, \chi_{\gamma}\rangle| > 0 \textrm{ and } \dim_{\mathbb F_p}(\mathrm{span}(\Delta_0(\mathbf{T}))) \leq k \right] \geq \frac{\eta}{4}\,.$

This completes the proof of Theorem 3.

1.7. Prologue: A structure theorem

Generalizing the preceding argument a bit, one can prove the following.

Let ${G}$ be a finite abelian group and use ${\hat G}$ to denote the dual group. Let ${\mu}$ denote the uniform measure on ${G}$. For every ${\gamma \in \hat G}$, let ${\chi_{\gamma}}$ denote the corresponding character. Let us define a degree-${k}$ Riesz product to be a function of the form

$\displaystyle R(x) = \prod_{i=1}^k (1+\varepsilon_{i} \Lambda_i \chi_{\gamma_i}(x))$

for some ${\gamma_1, \ldots, \gamma_k \in \hat G}$ and ${\varepsilon_1, \ldots, \varepsilon_k \in \{-1,1\}}$ and ${\Lambda_1, \ldots, \Lambda_k \in \{\mathrm{Re},\mathrm{Im}\}}$.

Theorem 4 For every ${\eta > 0}$, the following holds. For every ${f : G \rightarrow \mathbb R_+}$ with ${\mathbb E_{\mu} [f]=1}$, there exists a ${g : G \rightarrow \mathbb R_+}$ with ${\mathbb E_{\mu} [g] = 1}$ such that ${\|\hat f - \hat g\|_{\infty} \leq \eta}$ and ${g}$ is a convex combination of degree-${k}$ Riesz products where

$\displaystyle k \leq O(1) \frac{\mathrm{Ent}_{\mu}(f)}{\eta} + O(\log (1/\eta))\,.$

1.8. A prologue’s prologue

To indicate the lack of algebraic structure required for the preceding statement, we can set things up in somewhat greater generality.

For simplicity, let ${X}$ be a finite set equipped with a probability measure ${\mu}$. Recall that ${L^2(X)}$ is the Hilbert space of real-valued functions on ${X}$ equipped with the inner product ${\langle f,g\rangle = \mathop{\mathbb E}_{\mu}[fg]}$. Let ${\mathcal F \subseteq L^2(X)}$ be a set of functionals with the property that ${\|\varphi\|_{\infty} \leq 1}$ for ${\varphi \in \mathcal F}$.

Define a degree-${k}$ ${\mathcal F}$-Riesz product as a function of the form

$\displaystyle R(x) = \prod_{i=1}^k (1+\varphi_i(x))$

for some functions ${\varphi_1, \ldots, \varphi_k \in \mathcal F}$. Define also the (semi-) norm ${\|f\|_{\mathcal F} = \sup_{\varphi \in \mathcal F} |\langle f,\varphi\rangle_{L^2(X)}|}$.

Theorem 5 For every ${\eta > 0}$, the following holds. For every ${f : X \rightarrow \mathbb R_+}$ with ${\mathbb E_{\mu} [f]=1}$, there exists a ${g : X \rightarrow \mathbb R_+}$ with ${\mathbb E_{\mu} [g] = 1}$ such that ${\|f - g\|_{\mathcal F} \leq \eta}$ and ${g}$ is a convex combination of degree-${k}$ ${\mathcal F}$-Riesz products where

$\displaystyle k \leq O(1) \frac{\mathrm{Ent}_{\mu}(f)}{\eta} + O(\log (1/\eta))\,.$