1. Construction of Föllmer’s drift
In a previous post, we saw how an entropy-optimal drift process could be used to prove the Brascamp-Lieb inequalities. Our main tool was a result of Föllmer that we now recall and justify. Afterward, we will use it to prove the Gaussian log-Sobolev inequality.
$$W_t = B_t + \int_0^t v_s\,ds\,, \qquad\qquad (1)$$
where $\{v_s\}$ is a progressively measurable drift such that $W_1$ has law $\mu$.
$$D(\mu \,\|\, \gamma_n) = \min\, \mathbb{E}\left[\int_0^1 \frac12 \|v_t\|^2\,dt\right], \qquad\qquad (2)$$
where the minimum is over all drift processes of the form (1).
Thus we need only exhibit a drift achieving equality. Föllmer's drift is
$$v_t = \nabla \log P_{1-t} f(W_t)\,,$$
where $\{P_t\}$ is the Brownian semigroup defined by
$$P_t f(x) = \mathbb{E}\left[f(x + B_t)\right].$$
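As a numerical sanity check (not part of the argument), here is a sketch for a hypothetical one-dimensional target $\mu = N(m,1)$, for which $f(x) = e^{mx - m^2/2}$ and Föllmer's drift works out to the constant $m$. We estimate the semigroup by Monte Carlo, take a finite-difference gradient of its logarithm, and run an Euler-Maruyama discretization of the drift process:

```python
import math
import random
import statistics

random.seed(0)
m = 1.5  # target mu = N(m, 1); its density w.r.t. gamma_1 is f(x) = exp(m*x - m^2/2)

def f(x):
    return math.exp(m * x - m * m / 2.0)

# Shared normal samples: common random numbers keep the Monte Carlo estimate
# of P_t f smooth in x, so finite differences are stable.
ZS = [random.gauss(0.0, 1.0) for _ in range(1000)]

def P(t, x):
    """Brownian semigroup P_t f(x) = E[f(x + B_t)], estimated by Monte Carlo."""
    if t <= 0.0:
        return f(x)
    s = math.sqrt(t)
    return sum(f(x + s * z) for z in ZS) / len(ZS)

def follmer_drift(t, x, h=1e-4):
    """v_t(x) = (d/dx) log P_{1-t} f(x), via a centered finite difference."""
    return (math.log(P(1.0 - t, x + h)) - math.log(P(1.0 - t, x - h))) / (2.0 * h)

def sample_W1(steps=10):
    """Euler-Maruyama discretization of dW_t = v_t(W_t) dt + dB_t, W_0 = 0."""
    w, dt = 0.0, 1.0 / steps
    for i in range(steps):
        w += follmer_drift(i * dt, w) * dt + random.gauss(0.0, math.sqrt(dt))
    return w

W1 = [sample_W1() for _ in range(100)]
print(statistics.mean(W1), statistics.pvariance(W1))  # ~ (1.5, 1.0)
```

With the drift recovered numerically, the simulated $W_1$ has sample mean $\approx m$ and variance $\approx 1$, i.e. approximately the law $\mu$.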
We are left to show that $W_1$ has law $\mu$ and that $\mathbb{E}\left[\int_0^1 \frac12 \|v_t\|^2\,dt\right] = D(\mu\,\|\,\gamma_n)$.
We will prove the first fact using Girsanov's theorem to argue about the change of measure between $\{W_t\}$ and $\{B_t\}$. As in the previous post, we will argue somewhat informally, using the heuristic that the law of $dB_t$ is that of a Gaussian random variable in $\mathbb{R}^n$ with covariance $dt \cdot I$. Itô's formula asserts that this heuristic is justified (see our use of the formula below).
The following lemma says that, given any sample path of our process up to time $t$, the probability that Brownian motion (without drift) would have "done the same thing" is $\exp\left(-\int_0^t v_s \cdot dB_s - \frac12 \int_0^t \|v_s\|^2\,ds\right)$.
Remark 1 I chose to present various steps in the next proof at varying levels of formality. The arguments have the same structure as corresponding formal proofs, but I thought (perhaps naïvely) that this would be instructive.
Lemma 2 If $\{v_t\}$ is progressively measurable and satisfies Novikov's condition $\mathbb{E}\exp\left(\frac12 \int_0^1 \|v_t\|^2\,dt\right) < \infty$, then under the measure $Q$ given by
$$\frac{dQ}{dP} = \exp\left(-\int_0^1 v_t \cdot dB_t - \frac12 \int_0^1 \|v_t\|^2\,dt\right),$$
the process $\{W_t\}$ has the same law as $\{B_t\}$.
Proof: We argue by analogy with the discrete proof. First, let us define the infinitesimal "transition kernel" of Brownian motion using our heuristic that $dB_t$ has covariance $dt \cdot I$:
$$p_{dt}(x,y) = \frac{1}{(2\pi\,dt)^{n/2}} \exp\left(-\frac{\|x-y\|^2}{2\,dt}\right).$$
We can also compute the (time-inhomogeneous) transition kernel of $\{W_t\}$:
$$q_t^{t+dt}(x,y) = \frac{1}{(2\pi\,dt)^{n/2}} \exp\left(-\frac{\|y - x - v_t\,dt\|^2}{2\,dt}\right).$$
Here we are using that $dW_t = v_t\,dt + dB_t$ and $v_t$ is deterministic conditioned on the past, thus the law of $dW_t$ is a normal with mean $v_t\,dt$ and covariance $dt \cdot I$.
To avoid confusion of derivatives, let's use $q_t$ for the density of $\{W_s : s \le t\}$ and $p_t$ for the density of Brownian motion $\{B_s : s \le t\}$ (recall that these are densities on paths). Now let us relate the density $q_{t+dt}$ to the density $q_t$. We use here the notations $\omega_t, d\omega_t$ to denote a (non-random) sample path of $\{W_t\}$:
$$q_{t+dt}(\omega) = q_t(\omega)\, q_t^{t+dt}(\omega_t, \omega_t + d\omega_t) = q_t(\omega)\, p_{dt}(\omega_t, \omega_t + d\omega_t) \exp\left(v_t \cdot d\omega_t - \frac12 \|v_t\|^2\,dt\right),$$
where the last line uses the explicit Gaussian forms of $p_{dt}$ and $q_t^{t+dt}$.
Now by "heuristic" induction, we can assume
$$q_t(\omega) = p_t(\omega) \exp\left(\int_0^t v_s \cdot d\omega_s - \frac12 \int_0^t \|v_s\|^2\,ds\right),$$
yielding
$$q_{t+dt}(\omega) = p_{t+dt}(\omega) \exp\left(\int_0^{t+dt} v_s \cdot d\omega_s - \frac12 \int_0^{t+dt} \|v_s\|^2\,ds\right).$$
In the last line, we used the fact that $p_t(\omega)\,p_{dt}(\omega_t, \omega_t + d\omega_t) = p_{t+dt}(\omega)$, i.e. that $p_{dt}$ is the infinitesimal transition kernel for Brownian motion.
From Lemma 2, it will follow that $W_t$ has the law $d\mu_t = P_{1-t} f\,d\gamma_t$, where $\gamma_t$ is the law of $B_t$. In particular, $W_1$ has the law $f\,d\gamma_n = \mu$, which was our first goal.
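To make the reweighting in Lemma 2 concrete, here is a small simulation sketch (an illustration, not a proof) for the tractable special case of a constant drift $v$, where the change of measure is $\exp(-v B_1 - v^2/2)$: reweighting samples of $W_1 = v + B_1$ by this factor should recover the mean and variance of $B_1$.

```python
import math
import random

random.seed(1)
v = 0.8   # constant drift: the Girsanov factor is exp(-v*B_1 - v^2/2)
N = 20000

pairs = []
for _ in range(N):
    b1 = random.gauss(0.0, 1.0)               # B_1 under the original measure P
    w1 = v + b1                               # W_1 = v*1 + B_1
    weight = math.exp(-v * b1 - v * v / 2.0)  # dQ/dP along this path
    pairs.append((w1, weight))

wsum = sum(w for _, w in pairs)
mean = sum(x * w for x, w in pairs) / wsum
var = sum(x * x * w for x, w in pairs) / wsum - mean * mean
print(mean, var)  # ~ (0.0, 1.0): under Q, W_1 has the law of B_1
```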
Given our preceding less formal arguments, let us use a proper stochastic calculus argument to establish (3):
$$\log f(W_1) = \int_0^1 v_t \cdot dB_t + \frac12 \int_0^1 \|v_t\|^2\,dt\,. \qquad\qquad (3)$$
To do that, we need a way to calculate the differential $d \log P_{1-t} f(W_t)$.
Notice that this involves both time and space derivatives.
Itô's lemma. Suppose we have a sufficiently smooth function $\psi(x,t)$, where $x$ is a space variable and $t$ is a time variable. We can expand $d\psi$ via its Taylor series:
$$d\psi = \partial_t \psi\,dt + \partial_x \psi\,dx + \frac12 \partial_x^2 \psi\,(dx)^2 + \frac12 \partial_t^2 \psi\,(dt)^2 + \partial_x \partial_t \psi\,dx\,dt + \cdots$$
Normally we could eliminate the terms $(dx)^2$, $(dt)^2$, $dx\,dt$, etc. since they are lower order as $dt, dx \to 0$. But recall that for Brownian motion we have the heuristic $(dB_t)^2 = dt$. Thus we cannot eliminate the second-order space derivative if we plan to plug in $x = B_t$ (or $x = W_t$, a process driven by Brownian motion). Itô's lemma says that this consideration alone gives us the correct result:
$$d\psi(B_t, t) = \partial_t \psi\,dt + \partial_x \psi\,dB_t + \frac12 \partial_x^2 \psi\,dt\,.$$
This generalizes in a straightforward way to the higher-dimensional setting $\psi : \mathbb{R}^n \times [0,\infty) \to \mathbb{R}$, where the second-order term becomes $\frac12 \Delta \psi\,dt$.
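The heuristic $(dB_t)^2 = dt$ and the resulting Itô correction can be checked numerically on a single discretized path; the sketch below (with the illustrative choice $\psi(x) = x^2$) compares the quadratic variation to $t = 1$ and the two sides of Itô's formula:

```python
import math
import random

random.seed(2)
n = 20000
dt = 1.0 / n

dB = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]  # increments on [0, 1]
B = [0.0]
for x in dB:
    B.append(B[-1] + x)

# The heuristic (dB_t)^2 = dt: the quadratic variation over [0,1] concentrates at 1.
quad_var = sum(x * x for x in dB)

# Ito's formula for psi(x) = x^2 (no explicit time dependence):
#   psi(B_1) = int_0^1 2 B_t dB_t + (1/2) int_0^1 2 dt
ito_rhs = sum(2.0 * B[i] * dB[i] for i in range(n)) + 1.0
print(quad_var, B[-1] ** 2, ito_rhs)
```

Without the Itô correction term $+1$, the stochastic integral alone would miss $\psi(B_1)$ by exactly the quadratic variation.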
With Itô's lemma in hand, let us continue to calculate the derivative
$$d\,P_{1-t} f(W_t) = -\frac12 \Delta P_{1-t} f(W_t)\,dt + \frac12 \Delta P_{1-t} f(W_t)\,dt + \nabla P_{1-t} f(W_t) \cdot dW_t = P_{1-t} f(W_t)\left(\|v_t\|^2\,dt + v_t \cdot dB_t\right),$$
where the last equality uses $\nabla P_{1-t} f(W_t) = P_{1-t} f(W_t)\,v_t$ and $dW_t = v_t\,dt + dB_t$.
For the time derivative (the first term), we have employed the heat equation
$$\partial_t P_t f = \frac12 \Delta P_t f\,,$$
where $\Delta$ is the Laplacian on $\mathbb{R}^n$; the minus sign arises from differentiating $t \mapsto P_{1-t} f$.
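The heat equation can be checked numerically in one dimension for the illustrative choice $f(x) = \cos x$, for which $P_t f(x) = e^{-t/2}\cos x$ in closed form; the sketch estimates $P_t f$ by Monte Carlo (with shared normal samples so finite differences are stable) and compares $\partial_t P_t f$ with $\frac12 \partial_x^2 P_t f$:

```python
import math
import random

random.seed(3)
Z = [random.gauss(0.0, 1.0) for _ in range(100000)]  # shared normals

def P(t, x):
    """P_t f(x) = E[f(x + B_t)] for f = cos, by Monte Carlo."""
    s = math.sqrt(t)
    return sum(math.cos(x + s * z) for z in Z) / len(Z)

t, x, h = 0.5, 0.3, 1e-3
dPdt = (P(t + h, x) - P(t - h, x)) / (2.0 * h)             # time derivative
lap = (P(t, x + h) - 2.0 * P(t, x) + P(t, x - h)) / h**2   # space Laplacian
print(dPdt, 0.5 * lap)  # both ~ -0.5 * exp(-t/2) * cos(x)
```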
Note that the heat equation was already contained in our "infinitesimal density" $p_{dt}$ in the proof of Lemma 2, or in the representation $P_t f(x) = \mathbb{E}[f(x+B_t)]$, and Itô's lemma was also contained in our heuristic that $dB_t$ has covariance $dt \cdot I$.
Using Itô's formula again yields
$$d \log P_{1-t} f(W_t) = \frac{d\,P_{1-t} f(W_t)}{P_{1-t} f(W_t)} - \frac12 \frac{\left(d\,P_{1-t} f(W_t)\right)^2}{\left(P_{1-t} f(W_t)\right)^2} = v_t \cdot dB_t + \frac12 \|v_t\|^2\,dt\,,$$
and integrating from $0$ to $1$, using $\log P_1 f(W_0) = \log \int f\,d\gamma_n = 0$, gives our desired conclusion (3).
Our final task is to establish optimality: $\mathbb{E}\left[\int_0^1 \frac12 \|v_t\|^2\,dt\right] = D(\mu\,\|\,\gamma_n)$. We apply the formula (3):
$$D(\mu\,\|\,\gamma_n) = \mathbb{E}\left[\log f(W_1)\right] = \mathbb{E}\left[\int_0^1 v_t \cdot dB_t\right] + \mathbb{E}\left[\frac12 \int_0^1 \|v_t\|^2\,dt\right] = \mathbb{E}\left[\frac12 \int_0^1 \|v_t\|^2\,dt\right],$$
where we used $\mathbb{E}\left[\int_0^1 v_t \cdot dB_t\right] = 0$. Combined with (2), this completes the proof of the theorem.
2. The Gaussian log-Sobolev inequality
The Gaussian log-Sobolev inequality states that for every smooth $g : \mathbb{R}^n \to \mathbb{R}$ with $\int g^2\,d\gamma_n = 1$,
$$\int g^2 \log g^2\,d\gamma_n \le 2 \int \|\nabla g\|^2\,d\gamma_n\,. \qquad\qquad (4)$$
First, we discuss the correct way to interpret this. Define the Ornstein-Uhlenbeck semi-group $\{U_t\}$ by its action
$$U_t f(x) = \mathbb{E}\left[f\!\left(e^{-t} x + \sqrt{1 - e^{-2t}}\,Z\right)\right], \qquad Z \sim \gamma_n\,.$$
This is the natural stationary diffusion process on Gaussian space: $\gamma_n$ is its invariant measure. For every measurable $f$, we have
$$U_t f(x) \to \int f\,d\gamma_n \quad \text{as } t \to \infty\,.$$
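As a quick illustration (with the hypothetical test function $f(x) = x^2$, for which $U_t f(x) = e^{-2t}x^2 + (1 - e^{-2t})$ in closed form), one can check the semi-group's action and its convergence to the Gaussian average by Monte Carlo:

```python
import math
import random

random.seed(4)

def U(t, f, x, n=50000):
    """Ornstein-Uhlenbeck semigroup U_t f(x) = E[f(e^{-t} x + sqrt(1 - e^{-2t}) Z)]."""
    a = math.exp(-t)
    s = math.sqrt(1.0 - a * a)
    return sum(f(a * x + s * random.gauss(0.0, 1.0)) for _ in range(n)) / n

f = lambda z: z * z  # closed form: U_t f(x) = e^{-2t} x^2 + (1 - e^{-2t})
x = 2.0
for t in [0.0, 0.5, 2.0, 5.0]:
    exact = math.exp(-2.0 * t) * x * x + (1.0 - math.exp(-2.0 * t))
    print(t, U(t, f, x), exact)  # U_t f(x) -> E_gamma[f] = 1 as t grows
```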
The log-Sobolev inequality yields quantitative convergence in the relative entropy distance as follows. Define the Fisher information
$$I(\mu\,\|\,\gamma_n) = \int \left\|\nabla \log \frac{d\mu}{d\gamma_n}\right\|^2 d\mu\,.$$
One can check that if $\mu_t$ denotes the law of $\mu$ evolved under the Ornstein-Uhlenbeck semi-group for time $t$, then
$$\frac{d}{dt}\, D(\mu_t\,\|\,\gamma_n) = -\,I(\mu_t\,\|\,\gamma_n)\,,$$
thus the Fisher information describes the instantaneous decay of the relative entropy of $\mu_t$ under diffusion.
So we can rewrite the log-Sobolev inequality as:
$$I(\mu\,\|\,\gamma_n) \ge 2\,D(\mu\,\|\,\gamma_n)\,. \qquad\qquad (5)$$
Together with the preceding identity, this yields $D(\mu_t\,\|\,\gamma_n) \le e^{-2t} D(\mu\,\|\,\gamma_n)$.
This expresses the intuitive fact that when the relative entropy is large, its rate of decay toward equilibrium is faster.
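For a translated Gaussian $\mu = N(m,1)$, where the log-Sobolev inequality is tight, all of these quantities are explicit: under the Ornstein-Uhlenbeck flow $\mu_t = N(m e^{-t}, 1)$, so $D(\mu_t\,\|\,\gamma_1) = m^2 e^{-2t}/2$ and $I(\mu_t\,\|\,\gamma_1) = m^2 e^{-2t}$. A short sketch checks the decay identity and the equality case numerically:

```python
import math

m = 2.0  # mu = N(m, 1); under the OU flow, mu_t = N(m e^{-t}, 1)

def D(t):
    """Relative entropy D(mu_t || gamma_1) = (m e^{-t})^2 / 2."""
    return (m * math.exp(-t)) ** 2 / 2.0

def I(t):
    """Fisher information I(mu_t || gamma_1) = (m e^{-t})^2."""
    return (m * math.exp(-t)) ** 2

t, h = 0.7, 1e-5
dDdt = (D(t + h) - D(t - h)) / (2.0 * h)
print(dDdt, -I(t), -2.0 * D(t))  # de Bruijn: dD/dt = -I; here I = 2D (LSI is tight)
```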
Martingale property of the optimal drift. Now for the proof of (5). Let $\{W_t\}$ be the entropy-optimal process with $W_1 \sim \mu$. We need one more fact about the optimal drift $v_t = \nabla \log P_{1-t} f(W_t)$: it is a martingale, i.e. $\mathbb{E}\left[v_t \mid \mathcal{F}_s\right] = v_s$ for $s \le t$. Granting this, (5) follows: for a martingale, $t \mapsto \mathbb{E}\,\|v_t\|^2$ is non-decreasing, hence
$$D(\mu\,\|\,\gamma_n) = \mathbb{E}\left[\int_0^1 \frac12 \|v_t\|^2\,dt\right] \le \frac12\,\mathbb{E}\,\|v_1\|^2 = \frac12\,\mathbb{E}\,\|\nabla \log f(W_1)\|^2 = \frac12\, I(\mu\,\|\,\gamma_n)\,.$$
Let’s give two arguments to support this.
Argument one: Brownian bridges. First, note that by the chain rule for relative entropy, we have:
$$D\big(\{W_t\}\,\big\|\,\{B_t\}\big) = D(W_1\,\|\,B_1) + \mathop{\mathbb{E}}_{x \sim \mu}\Big[D\big(\mathrm{law}(W \mid W_1 = x)\,\big\|\,\mathrm{law}(B \mid B_1 = x)\big)\Big].$$
But from optimality, $D\big(\{W_t\}\,\big\|\,\{B_t\}\big) = \mathbb{E}\left[\frac12 \int_0^1 \|v_t\|^2\,dt\right] = D(\mu\,\|\,\gamma_n) = D(W_1\,\|\,B_1)$, so we know that the latter expectation is zero. Therefore $\mu$-almost surely, we have
$$D\big(\mathrm{law}(W \mid W_1 = x)\,\big\|\,\mathrm{law}(B \mid B_1 = x)\big) = 0\,.$$
This implies that if we condition on the endpoint $W_1 = x$, then $\{W_t\}$ is a Brownian bridge (i.e., a Brownian motion conditioned to start at $0$ and end at $x$).
This implies that $\mathbb{E}\left[v_t \mid \mathcal{F}_s, W_1\right] = v_s$, as one can check that a Brownian bridge with endpoint $x$ is described by the drift process $u_t = \frac{x - W_t}{1-t}$, and
$$\mathbb{E}\left[\frac{x - W_t}{1-t}\;\Big|\;\mathcal{F}_s\right] = \frac{x - W_s}{1-s}\,.$$
Averaging over $x \sim \mu$ then gives $\mathbb{E}\left[v_t \mid \mathcal{F}_s\right] = v_s$.
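The bridge-drift computation can be checked by simulation: sampling Brownian bridges from $0$ to a hypothetical endpoint $x$ and averaging the drift $(x - W_t)/(1-t)$ should give the same value (namely $x$) at every time $t$:

```python
import math
import random

random.seed(5)
x = 1.2  # hypothetical bridge endpoint

def mean_bridge_drift(t, paths=20000):
    """Average of (x - W_t)/(1 - t) over Brownian bridges W from 0 to x on [0, 1]."""
    acc = 0.0
    for _ in range(paths):
        b_t = random.gauss(0.0, math.sqrt(t))              # B_t
        b_1 = b_t + random.gauss(0.0, math.sqrt(1.0 - t))  # B_1
        w_t = t * x + (b_t - t * b_1)                      # bridge: W_t = t*x + (B_t - t*B_1)
        acc += (x - w_t) / (1.0 - t)
    return acc / paths

for t in [0.2, 0.5, 0.8]:
    print(t, mean_bridge_drift(t))  # ~ 1.2 at every t
```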
That seemed complicated. There is a simpler way to see this: Given $W_1 = x$, every "permutation" of the infinitesimal steps of a bridge from $0$ to $x$ has the same law (by commutativity, they all land at $x$). Thus the marginal law of $dW_t$ should be the same at every time $t$. In particular,
$$\mathbb{E}\left[dW_t \mid W_1 = x\right] = x\,dt\,.$$
Argument two: Change of measure. There is a more succinct (though perhaps more opaque) way to see that $\{v_t\}$ is a martingale. Note that the process $\nabla P_{1-t} f(B_t) = \mathbb{E}\left[\nabla f(B_1) \mid B_t\right]$ is a Doob martingale. But we have
$$v_t = \frac{\nabla P_{1-t} f(W_t)}{P_{1-t} f(W_t)}\,,$$
and we also know that $\frac{1}{P_{1-t} f(W_t)}$ is precisely the change of measure that makes $\{W_t\}$ into Brownian motion.
Thus for $s \le t$,
$$\mathbb{E}\left[v_t \mid \mathcal{F}_s\right] = \mathbb{E}\left[\frac{\nabla P_{1-t} f(W_t)}{P_{1-t} f(W_t)}\;\Big|\;\mathcal{F}_s\right].$$
The latter quantity is $\frac{\nabla P_{1-s} f(W_s)}{P_{1-s} f(W_s)} = v_s$. In the last equality, we used the fact that $\frac{1}{P_{1-t} f(W_t)}$ is precisely the change of measure that turns $\{W_t\}$ into Brownian motion.