Weird logarithm rules in an attempt to prove an upper bound on the JSD between two Gaussian distributions


I'm currently working on my thesis in Deep Learning and stumbled upon a paper that I think is closely related to my topic. In short, there are some steps in its derivation that I could not follow. Here's the link to the part of the derivation that I don't understand:

The Screenshot of the part of the paper

I cannot understand the transition from Eq. 20 to Eq. 21 to Eq. 22. My current guess is that Eq. 21 is missing brackets around

$\log \frac{q(x)}{p(x)} + \log \frac{r(x)}{p(x)} $

But even if that's true, I don't understand the transformation from Eq. 21 to Eq. 22. Can anyone help? Thanks in advance!


There is 1 answer below.


Note that it is an inequality (not an equality). In fact, it uses the convexity of the negative $\log$ function.

In general, for a convex function $f$ you have $$ f\left(\frac{a+b}{2}\right) \le \frac{f(a)+f(b)}{2}, $$ which is Jensen's inequality with equal weights.
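To see this inequality in action, here is a minimal numerical check for $f(x) = -\log x$; the sample points `a` and `b` are arbitrary illustrative values, not taken from the paper:

```python
import math

def f(x):
    # The negative log function, which is convex on (0, inf).
    return -math.log(x)

# For several positive pairs (a, b), verify the midpoint convexity
# inequality f((a+b)/2) <= (f(a)+f(b))/2.
for a, b in [(0.5, 2.0), (1.0, 3.0), (0.1, 0.2)]:
    lhs = f((a + b) / 2)       # f evaluated at the midpoint
    rhs = (f(a) + f(b)) / 2    # average of f at the endpoints
    assert lhs <= rhs
    print(f"a={a}, b={b}: {lhs:.4f} <= {rhs:.4f}")
```

In the derivation you screenshot, this is applied with $f = -\log$, which turns the equality in Eq. 21 into the inequality in Eq. 22.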

And yes, the brackets are missing in the text.