Recall that our goal is to sketch a proof of the following theorem, where the notation is from the last post. I will assume a knowledge of the three posts on polyhedral lifts and non-negative rank, as our argument will proceed by analogy.
In this post, we will see how John’s theorem can be used to transform a psd factorization into one of a nicer analytic form. Using this, we will be able to construct a convex body that contains an approximation to every non-negative matrix of small psd rank.
1.1. Finite-dimensional operator norms
Let $H$ denote a finite-dimensional Euclidean space over $\mathbb{R}$, equipped with inner product $\langle \cdot,\cdot \rangle$ and norm $|\cdot|$. For a linear operator $A : H \to H$, we define the operator, trace, and Frobenius norms by

$\displaystyle \|A\| = \max_{|v|=1} |Av|\,, \qquad \|A\|_{*} = \mathrm{tr}\big(\sqrt{A^* A}\big)\,, \qquad \|A\|_F = \sqrt{\mathrm{tr}(A^* A)}\,.$
Let $\mathcal{S}(H)$ denote the set of self-adjoint linear operators on $H$. Note that for $A \in \mathcal{S}(H)$, the preceding three norms are precisely the $\ell_\infty$, $\ell_1$, and $\ell_2$ norms of the vector of eigenvalues of $A$. For $A, B \in \mathcal{S}(H)$, we use $A \succeq 0$ to denote that $A$ is positive semi-definite, and $A \succeq B$ for $A - B \succeq 0$. We use $\mathcal{D}(H)$ for the set of density operators: those $A \in \mathcal{S}(H)$ with $A \succeq 0$ and $\mathrm{tr}(A) = 1$.
One should recall that $\langle A, B \rangle = \mathrm{tr}(A^* B)$ is an inner product on the space of linear operators, and we have the operator analogs of the Hölder inequalities: $|\langle A, B\rangle| \leq \|A\| \cdot \|B\|_{*}$ and $|\langle A, B\rangle| \leq \|A\|_F \cdot \|B\|_F$.
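As a quick sanity check (my own illustration, not part of the original argument), here is a small numpy snippet verifying that the three norms of a self-adjoint operator are the $\ell_\infty$, $\ell_1$, $\ell_2$ norms of its eigenvalues, and that both Hölder-type bounds hold:

```python
# Numerical check: for self-adjoint A, the operator, trace, and Frobenius
# norms are the l_inf, l_1, l_2 norms of the eigenvalues, and
#   |<A,B>| <= ||A|| * ||B||_tr,   |<A,B>| <= ||A||_F * ||B||_F
# for the inner product <A,B> = tr(A^T B).
import numpy as np

rng = np.random.default_rng(0)

def sym(n):
    X = rng.standard_normal((n, n))
    return (X + X.T) / 2

def op_norm(A):  return np.linalg.norm(A, 2)       # largest singular value
def tr_norm(A):  return np.linalg.svd(A, compute_uv=False).sum()
def fro_norm(A): return np.linalg.norm(A, 'fro')

A, B = sym(6), sym(6)
ev = np.linalg.eigvalsh(A)

assert np.isclose(op_norm(A),  np.abs(ev).max())
assert np.isclose(tr_norm(A),  np.abs(ev).sum())
assert np.isclose(fro_norm(A), np.sqrt((ev ** 2).sum()))

inner = np.trace(A.T @ B)
assert abs(inner) <= op_norm(A) * tr_norm(B) + 1e-9
assert abs(inner) <= fro_norm(A) * fro_norm(B) + 1e-9
```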
1.2. Rescaling the psd factorization
As in the case of non-negative rank, consider finite sets $X$ and $Y$ and a matrix $M \in \mathbb{R}_+^{X \times Y}$. For the purposes of proving a lower bound on the psd rank of such a matrix, we would like to have a psd factorization in a nice analytic form.
To that end, suppose we have a rank-$r$ psd factorization

$\displaystyle M_{xy} = \mathrm{tr}(P_x Q_y)\,,$

where $\{P_x : x \in X\}$ and $\{Q_y : y \in Y\}$ are psd operators on $\mathbb{R}^r$. The following result of Briët, Dadush, and Pokutta (2013) gives us a way to "scale" the factorization so that it becomes nicer analytically. (The improved bound stated here is from an article of Fawzi, Gouveia, Parrilo, Robinson, and Thomas, and we follow their proof.)

Lemma 2: If $M \in \mathbb{R}_+^{X \times Y}$ admits a rank-$r$ psd factorization, then it admits a rank-$r$ psd factorization $M_{xy} = \mathrm{tr}(\tilde P_x \tilde Q_y)$ in which $\|\tilde P_x\| \leq r$ for every $x \in X$ and $\mathrm{tr}(\tilde Q_y) \leq r \max_{x,y} M_{xy}$ for every $y \in Y$.
Proof: Start with a rank-$r$ psd factorization $M_{xy} = \mathrm{tr}(P_x Q_y)$. Observe that there is a degree of freedom here, because for any invertible operator $S$ on $\mathbb{R}^r$, we get another psd factorization $M_{xy} = \mathrm{tr}\big((S P_x S^*)\,(S^{-1})^* Q_y S^{-1}\big)$.
Let $\mathcal{P} = \{P_x : x \in X\}$ and $\mathcal{Q} = \{Q_y : y \in Y\}$. Set

$\displaystyle K = \mathrm{conv}\Big(\bigcup_{x \in X} \big\{ u \in \mathbb{R}^r : uu^* \preceq P_x \big\}\Big)\,.$

We may assume that the images of the operators in $\mathcal{P}$ and in $\mathcal{Q}$ both span $\mathbb{R}^r$ (else we can obtain a lower-rank psd factorization by restricting to a smaller subspace); in particular, $K$ is full-dimensional. Both sets are bounded by finiteness of $X$ and $Y$, so $K$ is a centrally symmetric convex body.
By John's theorem, there is an invertible operator $S$ on $\mathbb{R}^r$ such that

$\displaystyle B_2 \subseteq S(K) \subseteq \sqrt{r} \cdot B_2\,, \qquad (2)$

where $B_2$ denotes the unit ball in the Euclidean norm. Let us now set $\tilde P_x = S P_x S^*$ and $\tilde Q_y = (S^{-1})^* Q_y S^{-1}$, and note that $uu^* \preceq P_x \iff (Su)(Su)^* \preceq \tilde P_x$, so that $S(K) = \mathrm{conv}\big(\bigcup_x \{v : vv^* \preceq \tilde P_x\}\big)$.
Eigenvalues of $\tilde P_x$: Let $u$ be an eigenvector of $\tilde P_x$, normalized so that $|u| = 1$, with corresponding eigenvalue $\lambda = \|\tilde P_x\|$. Then $\lambda\, uu^* \preceq \tilde P_x$, implying that $\sqrt{\lambda}\, u \in S(K)$ (here we use that $A \succeq \lambda' u' (u')^*$ for any eigenpair $(\lambda', u')$ of a psd operator $A$). Since $|\sqrt{\lambda}\, u| = \sqrt{\lambda}$, (2) implies that $\sqrt{\lambda} \leq \sqrt{r}$. We conclude that every eigenvalue of $\tilde P_x$ is at most $r$.
Trace of $\tilde Q_y$: Finally, observe that for any $y \in Y$ and any $v \in S(K)$, we have

$\displaystyle \langle v, \tilde Q_y v \rangle \leq \max_{x \in X} \mathrm{tr}(\tilde Q_y \tilde P_x) = \max_{x \in X} M_{xy}\,,$

since $v \mapsto \langle v, \tilde Q_y v\rangle$ is convex and every extreme point $w$ of $S(K)$ satisfies $ww^* \preceq \tilde P_x$ for some $x \in X$. As $B_2 \subseteq S(K)$, summing this bound over an orthonormal basis of $\mathbb{R}^r$ yields $\mathrm{tr}(\tilde Q_y) \leq r \max_{x,y} M_{xy}$, completing the proof. $\Box$
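The "degree of freedom" exploited in the proof is easy to check numerically. Here is a small numpy sketch (my own illustration): conjugating one side of a psd factorization by an invertible $S$ and the other side by the inverse preserves every entry $\mathrm{tr}(P_x Q_y)$, while changing the norms of the factors.

```python
# Rescaling invariance: P_x -> S P_x S^*, Q_y -> (S^{-1})^* Q_y S^{-1}
# leaves tr(P_x Q_y) unchanged and keeps both sides psd.
import numpy as np

rng = np.random.default_rng(1)
r = 4

def rand_psd():
    X = rng.standard_normal((r, r))
    return X @ X.T

P = [rand_psd() for _ in range(3)]   # "Alice's" operators
Q = [rand_psd() for _ in range(5)]   # "Bob's" operators
M = np.array([[np.trace(p @ q) for q in Q] for p in P])

S = rng.standard_normal((r, r))      # generically invertible
Sinv = np.linalg.inv(S)
P_t = [S @ p @ S.T for p in P]             # S P_x S^*
Q_t = [Sinv.T @ q @ Sinv for q in Q]       # (S^{-1})^* Q_y S^{-1}
M_t = np.array([[np.trace(p @ q) for q in Q_t] for p in P_t])

assert np.allclose(M, M_t)                 # same factorized matrix
assert all(np.linalg.eigvalsh(p).min() >= -1e-8 for p in P_t)  # still psd
```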
1.3. Convex proxy for psd rank
Again, in analogy with the non-negative rank setting, we can define an "analytic psd rank" parameter for matrices $M \in \mathbb{R}_+^{X \times Y}$: it is the smallest $r \geq 1$ such that one can write

$\displaystyle M_{xy} = \mathrm{tr}(P_x Q_y) \qquad \forall\, x \in X,\ y \in Y\,,$

where the $P_x$ and $Q_y$ are psd operators on some finite-dimensional Euclidean space satisfying $\|P_x\| \leq r$ for every $x \in X$ and $\mathbb{E}_{y}\, \mathrm{tr}(Q_y) \leq r$.
Note that we have implicitly equipped $X$ and $Y$ with the uniform measure. The main point here is that the dimension of the underlying space can be arbitrary: only the norms of the factors are constrained. One can verify that the set of matrices with analytic psd rank at most $r$ is convex; this is where the freedom in the dimension is used.
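The convexity claim can be seen by passing to direct sums of factorizations, which is exactly where unbounded dimension helps. Here is a numpy sketch of that argument (my own illustration): block-diagonal sums $P_x \oplus P'_x$ and $\tfrac12(Q_y \oplus Q'_y)$ factorize the average $\tfrac12(M + M')$, the operator-norm bound on the left side is the max of the two bounds, and the trace bound on the right side averages.

```python
# Convexity via direct sums: factorizations of M and M' combine into a
# factorization of (M + M')/2 in twice the dimension.
import numpy as np

rng = np.random.default_rng(2)
r = 3

def rand_psd():
    X = rng.standard_normal((r, r))
    return X @ X.T

def direct_sum(a, b):
    out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]))
    out[:a.shape[0], :a.shape[1]] = a
    out[a.shape[0]:, a.shape[1]:] = b
    return out

def factor_matrix(P, Q):
    return np.array([[np.trace(p @ q) for q in Q] for p in P])

P1, Q1 = [rand_psd() for _ in range(3)], [rand_psd() for _ in range(4)]
P2, Q2 = [rand_psd() for _ in range(3)], [rand_psd() for _ in range(4)]
M1, M2 = factor_matrix(P1, Q1), factor_matrix(P2, Q2)

P = [direct_sum(p1, p2) for p1, p2 in zip(P1, P2)]
Q = [direct_sum(q1, q2) / 2 for q1, q2 in zip(Q1, Q2)]

assert np.allclose(factor_matrix(P, Q), (M1 + M2) / 2)
# operator norm is the max of the blocks; trace averages:
assert np.isclose(np.linalg.norm(P[0], 2),
                  max(np.linalg.norm(P1[0], 2), np.linalg.norm(P2[0], 2)))
assert np.isclose(np.trace(Q[0]), (np.trace(Q1[0]) + np.trace(Q2[0])) / 2)
```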
And there is a corresponding approximation lemma. We use $\|M\|_\infty = \max_{x,y} |M_{xy}|$ and $\mathrm{rank}_{\mathrm{psd}}(M)$ for the smallest $r$ such that $M$ admits a rank-$r$ psd factorization.
Lemma 3: For every non-negative matrix $M \in \mathbb{R}_+^{X \times Y}$ with $\|M\|_\infty \leq 1$ and every $\varepsilon > 0$, there is a matrix $N \in \mathbb{R}_+^{X \times Y}$ whose analytic psd rank is $O\big(\mathrm{rank}_{\mathrm{psd}}(M)/\varepsilon\big)$ and

$\displaystyle \|M - N\|_\infty \leq \varepsilon\,.$
Using Lemma 2 in a straightforward way, it is not particularly difficult to construct the approximator $N$. One condition poses a slight difficulty: handling it requires adding a small multiple of the identity to the left-hand side of the factorization (to avoid a poor condition number), but this has a correspondingly small effect on the approximation quality. Putting "Alice" into "isotropic position" is not essential, but it makes the next part of the approach (quantum entropy optimization) somewhat simpler, because one is then always measuring relative entropy to the maximally mixed state.
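The identity-shift step above can be illustrated numerically (my own sketch, with a hypothetical choice of $\varepsilon$): replacing each $P_x$ by $P_x + \varepsilon I$ keeps a genuine psd factorization, pushes every eigenvalue of the left factors up to at least $\varepsilon$ (controlling the condition number), and perturbs each entry of the factorized matrix by only $\varepsilon \cdot \mathrm{tr}(Q_y)$.

```python
# Adding eps*I to the left factors: small, controlled perturbation.
import numpy as np

rng = np.random.default_rng(3)
r, eps = 4, 1e-3

def rand_psd(rank):
    X = rng.standard_normal((r, rank))
    return X @ X.T

P = [rand_psd(2) for _ in range(3)]   # deliberately singular left factors
Q = [rand_psd(r) for _ in range(4)]
M = np.array([[np.trace(p @ q) for q in Q] for p in P])

P_eps = [p + eps * np.eye(r) for p in P]
N = np.array([[np.trace(p @ q) for q in Q] for p in P_eps])

# eigenvalues bounded away from zero, entries barely moved:
assert all(np.linalg.eigvalsh(p).min() >= eps - 1e-10 for p in P_eps)
assert np.abs(N - M).max() <= eps * max(np.trace(q) for q in Q) + 1e-10
```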