Consider the family of normal distributions $$\{N(\mu,\sigma^2): \mu \in \mathbb{R}, \sigma >0\}$$ on $\mathbb{R}^n$. It's claimed in the book *Statistical Inference* by Silvey that if we restrict to the subfamily where $\mu = \sigma^2$, there is a nonzero function $\mathbb{R}^n\rightarrow \mathbb{R}$, namely $$f(x):= \overline{x}- \frac{1}{n-1}\sum_i(x_i-\overline{x})^2,$$
with zero expectation w.r.t. every distribution in the subfamily. (Here $\overline{x}$ denotes the average of the coordinates.) This is hard for me to see. Even in the two-dimensional case I get a messy integral
$$\frac{1}{2\pi\mu}\int_{\mathbb{R}^2} \bigg( \frac{x+y}{2} - \Big[\big(x-\tfrac{x+y}{2}\big)^2 + \big(y-\tfrac{x+y}{2}\big)^2\Big]\bigg) \exp \bigg[\frac{-(x-\mu)^2-(y-\mu)^2}{2\mu}\bigg]\,dx\,dy$$
which even Wolfram Alpha doesn't want to do. Am I really supposed to hack through this integral, or is it supposed to be clearer than that? I can see that in some sense $f$ is the difference of a sample average and a sample variance (although the $n-1$ rather than $n$ in the denominator puzzles me), but I don't know how to make sense of that.
In general, if $X_1,\dots,X_n$ are an i.i.d. sample from a fixed distribution with mean $\mu$ and finite variance $\sigma^2$, and we set $\overline{X}=\frac{1}{n} \sum_i X_i$ and $S^2=\frac{1}{n-1} \sum_i (X_i-\overline{X})^2$, then $E[\overline{X}]=\mu$ and $E[S^2]=\sigma^2$. In statistical language, we say the sample mean and sample variance are unbiased estimators of the population mean and population variance, respectively.
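Both identities are easy to check numerically. Here is a quick Monte Carlo sanity check using only the standard library (a sketch; the population parameters and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(0)
mu, sigma = 2.0, 1.5        # arbitrary population parameters
n, trials = 5, 200_000      # sample size and number of replications

mean_of_xbar = 0.0
mean_of_s2 = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mean_of_xbar += statistics.mean(sample)
    mean_of_s2 += statistics.variance(sample)  # uses the n-1 denominator
mean_of_xbar /= trials
mean_of_s2 /= trials

print(mean_of_xbar)   # close to mu      = 2.0
print(mean_of_s2)     # close to sigma^2 = 2.25
```

Note that `statistics.variance` already applies the $n-1$ denominator (its sibling `statistics.pvariance` divides by $n$ instead); averaging over many replications, the estimates land on $\mu$ and $\sigma^2$ with no systematic bias.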
The unbiasedness of the sample mean is just a simple application of linearity of expectation: $E[\overline{X}]=\frac{1}{n}\sum_i E[X_i]=\mu$.
The presence of the $n-1$ in the sample variance (instead of the $n$ you might expect) is called Bessel's correction. To derive it, first note that $S^2$ is translation invariant, so it suffices to consider the case $\mu=0$. Once you have done that, expand the square in the $i$-th term: you get $E[X_i^2]-2E[X_i \overline{X}]+E[\overline{X}^2]$. The first term is just $\sigma^2$ (since we have assumed $\mu=0$). The third term is $\frac{\sigma^2}{n}$ by independence. The interesting thing is that the second term is $-2\frac{\sigma^2}{n}$: each member of the sample has some positive correlation with the sample mean, which is not exactly cancelled by the $E[\overline{X}^2]$ term. Summing over $i$ therefore gives $E\big[\sum_i (X_i-\overline{X})^2\big]=(n-1)\sigma^2$, and dividing by $n-1$ yields $E[S^2]=\sigma^2$.
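Written out, the key computation (still with $\mu=0$, so $E[X_iX_j]=0$ for $i\neq j$) is

$$E[X_i\overline{X}] = \frac{1}{n}\sum_j E[X_i X_j] = \frac{E[X_i^2]}{n} = \frac{\sigma^2}{n},$$

and hence

$$E\Big[\sum_i (X_i-\overline{X})^2\Big] = \sum_i\Big(\sigma^2 - \frac{2\sigma^2}{n} + \frac{\sigma^2}{n}\Big) = n\sigma^2\cdot\frac{n-1}{n} = (n-1)\sigma^2.$$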
This is the first instance of the statistical concept of "degrees of freedom": the residuals $X_i-\overline{X}$ always sum to zero, so only $n-1$ of them are free.
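Returning to the original question: since $E[\overline{X}]=\mu$ and $E[S^2]=\sigma^2$, linearity gives $E[f(X)]=\mu-\sigma^2$, which vanishes on the subfamily $\mu=\sigma^2$; no integral-hacking required. A quick Monte Carlo sanity check of this (a sketch; the particular $\mu$ and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(1)
mu = 2.0                 # subfamily constraint: sigma^2 = mu
sigma = mu ** 0.5
n, trials = 5, 200_000

total = 0.0
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    # f(x) = xbar - S^2, with the n-1 denominator in S^2
    total += statistics.mean(x) - statistics.variance(x)

print(total / trials)   # close to 0, since E[f] = mu - sigma^2 = 0
```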