Entropy relative to convolution with a Gaussian


Denote by $H(\cdot||\cdot)$ the relative entropy (also known as KL divergence). If $X$ is a centered random variable and $G_{\sigma^2} \sim \mathcal{N}(0,\sigma^2)$ is a normal random variable with variance $\sigma^2$, it is not hard to show that the expression $$H(X||G_{\sigma^2})$$ is minimized over $\sigma^2$ when $\sigma^2 = \operatorname{Var}(X)$.
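(For completeness, the claim follows from a direct computation: writing $h(X)$ for the differential entropy of $X$ and expanding the Gaussian log-density,
$$H(X||G_{\sigma^2}) = -h(X) + \frac{1}{2}\log(2\pi\sigma^2) + \frac{E[X^2]}{2\sigma^2},$$
and setting the derivative with respect to $\sigma^2$ to zero gives $\sigma^2 = E[X^2]$, which equals $\operatorname{Var}(X)$ since $X$ is centered.)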

Now, let $Y$ be another random variable, independent of $G_{\sigma^2}$, so that the law of $Y+G_{\sigma^2}$ is the convolution of the law of $Y$ with a Gaussian. What can be said about $$H(X||Y+G_{\sigma^2})?$$ Several interesting questions could be:

  1. For which $\sigma^2$ is the expression minimized?
  2. How do small changes in $\sigma^2$ change the value of $H(X||Y+G_{\sigma^2})$?
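For concreteness, question 1 can be explored numerically. The sketch below (purely illustrative assumptions, not part of the question: $X \sim \mathrm{Unif}(-1,1)$, $Y \sim \mathrm{Unif}(-1/2,1/2)$ independent of $G_{\sigma^2}$) computes $H(X||Y+G_{\sigma^2})$ by quadrature and scans a grid of $\sigma^2$ values; since $\operatorname{supp}(X) \not\subseteq \operatorname{supp}(Y)$ here, the divergence blows up as $\sigma^2 \to 0$ and as $\sigma^2 \to \infty$, so an interior minimizer exists:

```python
from math import erf, sqrt, log

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

A = 1.0   # X ~ Uniform(-A, A)   -- illustrative assumption
B = 0.5   # Y ~ Uniform(-B, B)   -- illustrative assumption

def q_density(t, sigma2):
    """Density of Y + G_{sigma^2}: Uniform(-B, B) convolved with N(0, sigma^2)."""
    s = sqrt(sigma2)
    return (Phi((t + B) / s) - Phi((t - B) / s)) / (2 * B)

def kl_to_smoothed(sigma2, n=4000):
    """H(X || Y + G_{sigma^2}) via trapezoidal integration over supp(X) = [-A, A]."""
    p = 1.0 / (2 * A)  # density of X on its support
    h = 2 * A / n
    total = 0.0
    for i in range(n + 1):
        t = -A + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * p * log(p / q_density(t, sigma2))
    return total * h

# Scan sigma^2 on a grid to locate the minimizer numerically.
grid = [0.02 * k for k in range(1, 101)]
vals = [kl_to_smoothed(s2) for s2 in grid]
best = grid[vals.index(min(vals))]
print(f"approximate minimizer: sigma^2 ~ {best:.2f}")
```

Note that the guess $\sigma^2 = \operatorname{Var}(X) - \operatorname{Var}(Y)$ (matching second moments, by analogy with the Gaussian case above) need not coincide with the numerical minimizer in general.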

Any input or references will be most welcome.