Distance in the metric induced by the Fisher information matrix


From this paper (Section 2.1.2) I quote:

"The Fisher information matrix defines a (Riemannian) metric on $\Theta$: the distance in this metric, between two very close values of $\theta$ is given by the square root of twice the Kullback-Leibler divergence"

I do not understand how this result comes about. Why does the Fisher information matrix induce specifically

$d(\theta_0,\theta_1) = \sqrt{2 KL(P_{\theta_0},P_{\theta_1})}$

as a formula for the distance?

P.S.: I do, however, understand the problems of using the KL divergence as a distance measure.

Best answer:

Expanding the Kullback–Leibler divergence in a Taylor series, $f(\mathbf{x}) = f(\mathbf{a}) + (\mathbf{x} - \mathbf{a})^\mathsf{T} D f(\mathbf{a}) + \frac{1}{2!} (\mathbf{x} - \mathbf{a})^\mathsf{T} D^2 f(\mathbf{a}) (\mathbf{x} - \mathbf{a}) + \cdots$, in the variable $\widehat{\theta}$ around $\theta$, you get

$D_\text{KL}(\theta\parallel\widehat{\theta})\approx D_\text{KL}(\theta\parallel \widehat{\theta})|_{\widehat{\theta}=\theta}+(\widehat{\theta}-\theta)^\mathsf{T}\frac{\partial D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}+\frac{1}{2}(\widehat{\theta}-\theta)^\mathsf{T}\frac{\partial^2 D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}(\widehat{\theta}-\theta)$

and we can see that the first two terms vanish and that the Hessian in the third term is the Fisher information matrix:

$(a)\quad D_\text{KL}(\theta\parallel \widehat{\theta})|_{\widehat{\theta}=\theta}=\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta}=\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \theta)} dx=\int p(x; \theta)\ln(1) dx=0$

$(b)\quad \frac{\partial D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}= \frac{\partial}{\partial\widehat{\theta}}\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta} = \frac{\partial}{\partial\widehat{\theta}}\int p(x; \theta)(\ln p(x;\theta) - \ln p(x; \widehat{\theta})) dx|_{\widehat{\theta}=\theta}=-\int p(x; \theta)\frac{\frac{\partial}{\partial\widehat{\theta}} p(x; \widehat{\theta})}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta}=-\int \frac{\partial}{\partial\widehat{\theta}} p(x; \widehat{\theta})dx|_{\widehat{\theta}=\theta}=-\frac{\partial}{\partial\widehat{\theta}} \int p(x; \widehat{\theta})dx|_{\widehat{\theta}=\theta}=-\frac{\partial}{\partial\theta} \int p(x; \theta)dx=-\frac{\partial}{\partial\theta} 1=0$

$(c)\quad\frac{\partial^2 D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}=\frac{\partial^2}{\partial\widehat{\theta}\partial\widehat{\theta}}\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta}=\frac{\partial^2}{\partial\widehat{\theta}\partial\widehat{\theta}}\int p(x; \theta)(\ln p(x;\theta)-\ln p(x; \widehat{\theta})) dx|_{\widehat{\theta}=\theta}=-\int p(x; \theta)\frac{\partial^2}{\partial\widehat{\theta}\partial\widehat{\theta}}\ln p(x; \widehat{\theta}) dx|_{\widehat{\theta}=\theta}=-\int p(x; \theta)\frac{\partial^2}{\partial\theta\partial\theta}\ln p(x; \theta) dx={\cal I(\theta)}$
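Steps (b) and (c) can be checked numerically. A minimal sketch, using a concrete family of my own choosing (not from the post): Gaussians $N(\mu, \sigma^2)$ with fixed $\sigma$, parametrised by the mean, for which $D_\text{KL}(\mu\parallel\widehat{\mu}) = (\widehat{\mu}-\mu)^2/(2\sigma^2)$ in closed form and ${\cal I}(\mu) = 1/\sigma^2$:

```python
import math

# KL divergence between N(mu, sigma^2) and N(mu_hat, sigma^2), fixed sigma,
# parametrised by the mean: KL(mu || mu_hat) = (mu_hat - mu)^2 / (2 sigma^2).
# For this parametrisation the Fisher information is I(mu) = 1 / sigma^2.
sigma = 1.5
kl = lambda mu, mu_hat: (mu_hat - mu) ** 2 / (2 * sigma ** 2)

mu = 0.7
h = 1e-4  # finite-difference step

# (b): first derivative of KL in mu_hat, evaluated at mu_hat = mu -> 0
grad = (kl(mu, mu + h) - kl(mu, mu - h)) / (2 * h)

# (c): second derivative of KL in mu_hat at mu_hat = mu -> I(mu) = 1/sigma^2
hess = (kl(mu, mu + h) - 2 * kl(mu, mu) + kl(mu, mu - h)) / h ** 2

print(grad)                 # ~0
print(hess, 1 / sigma**2)   # both ~0.444
```

For this family the KL divergence is exactly quadratic in $\widehat{\mu}$, so the finite differences match the analytic values essentially to machine precision.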

Combining (a), (b) and (c), you obtain

$D_\text{KL}(\theta\parallel\widehat{\theta})\approx \frac{1}{2}(\widehat{\theta}-\theta)^\mathsf{T}{\cal I(\theta)}(\widehat{\theta}-\theta)$

Therefore $$d_{\text{KL}(\theta\parallel\widehat{\theta})}(\widehat{\theta},\theta)=\sqrt{2 D_\text{KL}(\theta\parallel\widehat{\theta})}\approx\sqrt{(\widehat{\theta}-\theta)^\mathsf{T}{{\cal I(\theta)}}(\widehat{\theta}-\theta)}=\|\widehat{\theta}-\theta\|_{{\cal I(\theta)}}=d_{\cal I(\theta)}(\widehat{\theta},\theta)$$

where $d_{\cal I(\theta)}(\widehat{\theta},\theta)$ is the distance in the metric defined by the Fisher information matrix.
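The approximation $\sqrt{2 D_\text{KL}} \approx \|\widehat{\theta}-\theta\|_{\cal I(\theta)}$ can also be seen numerically for a family where the KL divergence is *not* purely quadratic. A sketch with an example of my own choosing: $N(0, s^2)$ parametrised by the standard deviation $s$, where $D_\text{KL}(s_0\parallel s_1) = \ln(s_1/s_0) + s_0^2/(2 s_1^2) - 1/2$ and ${\cal I}(s) = 2/s^2$:

```python
import math

# Closed-form KL between N(0, s0^2) and N(0, s1^2), parametrised by the
# standard deviation s; Fisher information for this parametrisation: I(s) = 2/s^2.
def kl(s0, s1):
    return math.log(s1 / s0) + s0**2 / (2 * s1**2) - 0.5

s = 1.0
for delta in (0.1, 0.01, 0.001):
    d_kl = math.sqrt(2 * kl(s, s + delta))    # sqrt(2 D_KL)
    d_fim = math.sqrt(delta**2 * (2 / s**2))  # ||theta_hat - theta||_{I(theta)}
    print(delta, d_kl / d_fim)                # ratio -> 1 as delta -> 0
```

The ratio tends to 1 as the two parameter values approach each other, which is exactly the "two very close values of $\theta$" caveat in the quoted paper.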


As pointed out by @user1936752, while $d_{\cal I(\theta)}$ is symmetric in its arguments because it is a metric, $d_{\cal I(\theta)}(\widehat{\theta},\theta)=d_{\cal I(\theta)}(\theta,\widehat{\theta})$, the Kullback–Leibler divergence is not a metric, as $D_\text{KL}(\theta\parallel\widehat{\theta})\neq D_\text{KL}(\widehat{\theta}\parallel\theta)$. Consequently $d_{\cal I(\theta)}(\widehat{\theta},\theta)\neq d_{\cal I(\widehat{\theta})}(\widehat{\theta},\theta)$ in general, because

$$d_{\cal I(\theta)}(\widehat{\theta},\theta)\approx d_{\text{KL}(\theta\parallel\widehat{\theta})}(\widehat{\theta},\theta)\neq d_{\text{KL}(\widehat{\theta}\parallel\theta)}(\widehat{\theta},\theta)\approx d_{\cal I(\widehat{\theta})}(\widehat{\theta},\theta)$$
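This asymmetry is easy to exhibit concretely. A sketch using the same example family as above (my choice, not from the post): $N(0, s^2)$ parametrised by $s$, with $D_\text{KL}(s_0\parallel s_1) = \ln(s_1/s_0) + s_0^2/(2 s_1^2) - 1/2$ and ${\cal I}(s) = 2/s^2$:

```python
import math

# KL( N(0, s0^2) || N(0, s1^2) ), parametrised by the standard deviation s
def kl(s0, s1):
    return math.log(s1 / s0) + s0**2 / (2 * s1**2) - 0.5

theta, theta_hat = 1.0, 1.2

# KL is not symmetric in its two arguments ...
kl_fwd, kl_rev = kl(theta, theta_hat), kl(theta_hat, theta)

# ... and the local distances built from I(s) = 2/s^2 at the two
# base points differ as well
d_at_theta = math.sqrt((theta_hat - theta)**2 * (2 / theta**2))
d_at_theta_hat = math.sqrt((theta_hat - theta)**2 * (2 / theta_hat**2))

print(kl_fwd, kl_rev)              # ~0.0295 vs ~0.0377
print(d_at_theta, d_at_theta_hat)  # ~0.2828 vs ~0.2357
```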


Hope it helps.