I am sorry for the bad heading; it was difficult to come up with a better one. What I am trying to calculate is the expectation value of the (bias-corrected) variance divided by the squared mean for two random numbers drawn from the same normal distribution.
From a simple Monte Carlo simulation I am pretty sure the solution should be
$$\langle A\rangle = \frac{\sigma ^2}{\mu ^2} $$
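Such a Monte Carlo check can be sketched with NumPy (the values $\mu = 10$, $\sigma = 1$ are my own illustrative choice with $|\mu| \gg \sigma$; recall that for two samples the bias-corrected variance is $s^2 = (x_1-x_2)^2/2$):

```python
import numpy as np

# Monte Carlo estimate of <A> = E[s^2 / xbar^2] for pairs from N(mu, sigma^2);
# mu = 10, sigma = 1 are arbitrary illustrative values with |mu| >> sigma
rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 1.0, 1_000_000

x1 = rng.normal(mu, sigma, n)
x2 = rng.normal(mu, sigma, n)

# for a pair: bias-corrected variance s^2 = (x1 - x2)^2 / 2,
# squared sample mean xbar^2 = ((x1 + x2) / 2)^2
A = ((x1 - x2) ** 2 / 2) / ((x1 + x2) / 2) ** 2

print(A.mean())  # close to sigma^2 / mu^2 = 0.01
```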
However, I did not find a proof online and was not able to calculate it myself. To find the expectation value one has to solve the integral
$$\langle A\rangle = \frac{1}{2\pi \sigma ^2} \int \text{d}x_1\int \text d x_2\, \frac{(x_1-x_2)^2}{(x_1+x_2)^2}\exp{\left(-\frac{(x_1-\mu)^2}{2\sigma ^2}\right)}\exp{\left(-\frac{(x_2-\mu)^2}{2\sigma ^2}\right)}$$
The beginning is pretty easy. I tried the substitution $$a=x_1-x_2,\qquad b=x_1+x_2$$ to split the integral into $$\langle A\rangle = \frac{1}{4\pi \sigma ^2} \exp\left(-\frac{\mu^2}{\sigma^2}\right)\int \text{d}a~a^2 \exp{\left(-\frac{a^2}{4\sigma ^2} \right)} \int \text d b~b^{-2} \exp{\left(-\frac{b^2-4b\mu}{4\sigma ^2}\right)}$$ While the first integral is not very complicated, the second is more problematic. There is a pole at $b=0$, and since the Gaussian factor diverges at many points on the infinite semicircle, I cannot close the contour and use the residue theorem. I was not able to find an antiderivative, and I lack a plan C.
In order to get a nicer integral I substituted $c = b - 2\mu$ (completing the square in $b$, which also cancels the factor $\exp(-\mu^2/\sigma^2)$) and evaluated the Gaussian integral over $a$, leaving $$\langle A\rangle = \frac{\sigma}{\sqrt{\pi}} \int \text d c~(c+2\mu)^{-2} \exp{\left(-\frac{c^2}{4\sigma ^2}\right)}$$
I would be glad for any help.
Edit:
I no longer think that my expected result is true; it should only be the limiting value for $|\mu| \gg \sigma$.
Let $X_1 \sim \mathcal{N}(\mu_1,\sigma_1^2)$ and $X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)$ with $\operatorname{Cor}(X_1,X_2) = \rho$. We will compute an approximation of
$$ \mathbb{E}\left(\frac{X_1-X_2}{X_1+X_2}\right)^{2} $$
Let's try the delta method. Let $g(t_1,t_2) = \frac{t_1-t_2}{t_1+t_2}$. The first-order Taylor approximation of this function about $(\mu_1,\mu_2)$ is
$$ g(\mathbf{t}) \approx g(\boldsymbol{\mu}) + \sum_{i=1}^{2}(t_i-\mu_i)\frac{\partial g(\mathbf{t})}{\partial t_i}\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} $$
where,
$$ g(\boldsymbol{\mu}) = \frac{\mu_1-\mu_2}{\mu_1+\mu_2}, \ \ \ \ \frac{\partial g(\mathbf{t})}{\partial t_1} \bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} = \frac{2\mu_2}{(\mu_1+\mu_2)^2}, \ \ \ \ \frac{\partial g(\mathbf{t})}{\partial t_2} \bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} = \frac{-2\mu_1}{(\mu_1+\mu_2)^2} $$
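These derivatives are easy to double-check symbolically; a small SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

t1, t2, m1, m2 = sp.symbols('t1 t2 mu1 mu2', positive=True)
g = (t1 - t2) / (t1 + t2)

# gradient of g, evaluated at (mu1, mu2)
d1 = sp.simplify(sp.diff(g, t1).subs({t1: m1, t2: m2}))
d2 = sp.simplify(sp.diff(g, t2).subs({t1: m1, t2: m2}))

# d1 should equal 2*mu2/(mu1+mu2)**2 and d2 should equal -2*mu1/(mu1+mu2)**2
print(d1, d2)
```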
Put $t_1 = X_1$ and $t_2 = X_2$,
$$ \mathbb{E}g(\mathbf{t}) \approx g(\boldsymbol{\mu}) + 0 = \frac{\mu_1-\mu_2}{\mu_1+\mu_2} $$
and,
$$ \begin{align} \operatorname{Var}g(\mathbf{t}) &= \mathbb{E}(g(\mathbf{t}) - g(\boldsymbol{\mu}))^2 \approx \sum_{i=1}^{2}\sum_{j=1}^{2}\operatorname{Cov}(t_i-\mu_i, t_j-\mu_j)\frac{\partial g(\mathbf{t})}{\partial t_i}\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}}\frac{\partial g(\mathbf{t})}{\partial t_j}\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} \\ &= \sigma_1^2\left(\frac{\partial g(\mathbf{t})}{\partial t_1}\right)^2\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} + \sigma_2^2\left(\frac{\partial g(\mathbf{t})}{\partial t_2}\right)^2\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} + 2\rho\sigma_1\sigma_2\frac{\partial g(\mathbf{t})}{\partial t_1}\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}}\frac{\partial g(\mathbf{t})}{\partial t_2}\bigg\vert_{\mathbf{t} = \boldsymbol{\mu}} \\ &= \frac{4(\mu_2^2\sigma_1^2 + \mu_1^2\sigma_2^2 - 2\rho\sigma_1\sigma_2\mu_1\mu_2)}{(\mu_1+\mu_2)^4} \end{align} $$
Finally,
$$ \mathbb{E}g(\mathbf{t})^2 = \operatorname{Var}g(\mathbf{t}) + (\mathbb{E}g(\mathbf{t}))^2 \approx \frac{4(\mu_2^2\sigma_1^2 + \mu_1^2\sigma_2^2 - 2\rho\sigma_1\sigma_2\mu_1\mu_2) + (\mu_1-\mu_2)^2(\mu_1+\mu_2)^2}{(\mu_1+\mu_2)^4} $$
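As a sanity check, this approximation can be compared against simulation. A sketch built directly from $g(\boldsymbol{\mu})$ and the partial derivatives above, with arbitrary illustrative parameters ($\mu_1 = 10$, $\mu_2 = 8$, $\sigma_1 = 0.5$, $\sigma_2 = 0.7$, $\rho = 0.3$) chosen so the means are well away from zero:

```python
import numpy as np

# illustrative parameter choices (arbitrary), means well away from zero
mu1, mu2, s1, s2, rho = 10.0, 8.0, 0.5, 0.7, 0.3
n = 1_000_000

# Monte Carlo estimate of E[((X1-X2)/(X1+X2))^2] for correlated normals
rng = np.random.default_rng(0)
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x = rng.multivariate_normal([mu1, mu2], cov, size=n)
x1, x2 = x[:, 0], x[:, 1]
mc = np.mean(((x1 - x2) / (x1 + x2)) ** 2)

# delta-method approximation: Var g + (E g)^2, assembled from the gradient
g_mu = (mu1 - mu2) / (mu1 + mu2)
d1 = 2 * mu2 / (mu1 + mu2) ** 2
d2 = -2 * mu1 / (mu1 + mu2) ** 2
approx = (s1**2 * d1**2 + s2**2 * d2**2
          + 2 * rho * s1 * s2 * d1 * d2) + g_mu**2

print(mc, approx)  # the two values agree to within a few percent
```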
Now put $\mu_1 = \mu_2 = \mu$, $\sigma_1 = \sigma_2 = \sigma$, and $\rho = 0$ to get
$$ \mathbb{E}\left(\frac{X_1-X_2}{X_1+X_2}\right)^{2} \approx \frac{\sigma^2}{2\mu^2} $$
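A quick simulation of this symmetric case (again with the illustrative values $\mu = 10$, $\sigma = 1$, so $\sigma^2/(2\mu^2) = 0.005$) agrees with the approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 1.0, 1_000_000  # illustrative values, |mu| >> sigma

x1 = rng.normal(mu, sigma, n)
x2 = rng.normal(mu, sigma, n)
g2 = ((x1 - x2) / (x1 + x2)) ** 2

print(g2.mean(), sigma**2 / (2 * mu**2))  # roughly 0.005 for both
```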