Say we have a normally distributed variable $X \sim \mathcal{N}(\mu, \sigma^2)$. The Fisher information for $\mu$ is $\mathcal{I}(\mu) = \frac{1}{\sigma^2}$.
But if the variable is distributed as $X \sim \mathcal{N}(\alpha - \beta, \sigma^2)$, where $\alpha$ and $\sigma$ are known, what is the Fisher information $\mathcal{I}(\beta)$ for $\beta$?
Is it also $\frac{1}{\sigma^2}$ like for $\mu$, or would it take on some other form?
It will be the same.
Recall that, given an unbiased estimator of $\beta$, $1/\mathcal{I}(\beta)$ is the lowest variance that estimator can achieve. $1/\mathcal{I}(\beta)$ is called the Cramér-Rao lower bound (CRLB).
Now imagine that $\mathcal{I}(\mu) \neq \mathcal{I}(\beta)$. Note that if we observe samples from $\mathcal{N}(\mu, \sigma^2)$, we can apply a known, invertible transformation (negating each sample and adding $\alpha$) to obtain samples from $\mathcal{N}(\alpha - \mu, \sigma^2)$, which is exactly the second model with $\beta = \mu$. If the two informations differed, we'd be claiming that this deterministic relabeling either limits or improves our ability to estimate the mean, which does not seem right! Our ability to estimate $\mu$ cannot have changed, because we can move back and forth between the two distributions at will.
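This invariance is easy to check numerically. The sketch below (with arbitrary illustrative values for $\mu$, $\alpha$, $\sigma$, and the sample size) estimates $\mu$ by the sample mean in the first model and $\beta$ by $\alpha$ minus the sample mean in the second, and compares both empirical variances to the CRLB $\sigma^2/n$:

```python
import numpy as np

# Illustrative values (not from the question): mu = 2, alpha = 5, sigma = 1.5.
rng = np.random.default_rng(0)
mu, alpha, sigma = 2.0, 5.0, 1.5
n, trials = 50, 20_000

# Model 1: X ~ N(mu, sigma^2); the MLE of mu is the sample mean.
x = rng.normal(mu, sigma, size=(trials, n))
mu_hat = x.mean(axis=1)

# Model 2: Y ~ N(alpha - beta, sigma^2) with beta = alpha - mu;
# the MLE of beta is alpha minus the sample mean.
beta = alpha - mu
y = rng.normal(alpha - beta, sigma, size=(trials, n))
beta_hat = alpha - y.mean(axis=1)

# CRLB for n i.i.d. samples: 1 / (n * I) with per-sample I = 1/sigma^2.
crlb = sigma**2 / n
print(f"var(mu_hat)   = {mu_hat.var():.5f}")
print(f"var(beta_hat) = {beta_hat.var():.5f}")
print(f"CRLB          = {crlb:.5f}")
```

Both empirical variances land on the same CRLB, as the invariance argument predicts.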
The algebraic reason why $\mathcal{I}(\mu) = \mathcal{I}(\beta)$ is that the Fisher information involves a second derivative of the log-likelihood: the mean enters as $\alpha - \beta$, so differentiating with respect to $\beta$ instead of $\mu$ only flips a sign, and that sign vanishes when you differentiate twice (or square the score). See Steven M. Kay, *Fundamentals of Statistical Signal Processing: Estimation Theory*, Sec. 3.4.
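Concretely, the standard computation for a single observation under the second model goes as follows:

$$
\begin{align*}
\log f(x;\beta) &= -\tfrac{1}{2}\log(2\pi\sigma^2) - \frac{\bigl(x - (\alpha - \beta)\bigr)^2}{2\sigma^2},\\
\frac{\partial}{\partial\beta}\log f(x;\beta) &= -\frac{x - \alpha + \beta}{\sigma^2},\\
\frac{\partial^2}{\partial\beta^2}\log f(x;\beta) &= -\frac{1}{\sigma^2},\\
\mathcal{I}(\beta) &= -\mathbb{E}\!\left[\frac{\partial^2}{\partial\beta^2}\log f(X;\beta)\right] = \frac{1}{\sigma^2}.
\end{align*}
$$

The score picks up a minus sign relative to the $\mu$ case, but the second derivative is constant and identical, so $\mathcal{I}(\beta) = \mathcal{I}(\mu) = 1/\sigma^2$.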