I'm reading a book on Bayesian statistics and I came across the following:
Suppose that $\Psi = g(\Theta)$ and that $f$ is a density function. Then $f_\Psi(\psi) = f_\Theta(g^{-1}(\psi))\bigg|\dfrac{d}{d\psi}g^{-1}(\psi)\bigg|$.
Question: Why is this true? I can't find the appropriate theorem to justify this, and I can't figure it out myself.
Thanks in advance!



You have assumed $g$ is invertible. If $g$ is continuous, that can happen only if $g$ is strictly monotone. So here I will consider the case where $g$ is strictly increasing. In that case $g^{-1}$ is also strictly increasing, so $\dfrac d {d\psi} g^{-1}(\psi)\ge 0$ for all $\psi$ (possibly being equal to $0$ at some isolated points).
We have the c.d.f. $F_\Psi(\psi) = \Pr(\Psi \le \psi),$ and then \begin{align} f_\Psi(\psi) & = \frac d {d\psi} F_\Psi(\psi) = \frac d {d\psi} \Pr( g(\Theta) \le \psi) = \frac d {d\psi} \Pr(\Theta \le g^{-1}(\psi)) = \frac d {d\psi} F_\Theta(g^{-1}(\psi)) \\[10pt] & = f_\Theta(g^{-1}(\psi)) \cdot \frac d {d\psi} g^{-1}(\psi) \quad \text{by the chain rule.} \end{align} Since $\frac d {d\psi} g^{-1}(\psi) \ge 0$ here, it equals its own absolute value, which gives the stated formula. If instead $g$ is strictly decreasing, then $g(\Theta) \le \psi$ if and only if $\Theta \ge g^{-1}(\psi)$, so $F_\Psi(\psi) = 1 - F_\Theta(g^{-1}(\psi))$, and differentiating gives $f_\Psi(\psi) = -f_\Theta(g^{-1}(\psi)) \cdot \frac d {d\psi} g^{-1}(\psi)$. The derivative of $g^{-1}$ is now $\le 0$, so that minus sign is exactly what the absolute value in the formula absorbs.
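If it helps to see the formula in action, here is a quick numerical sanity check with a concrete choice of my own (not from the question): $\Theta \sim \text{Exponential}(1)$ and $g(\theta) = \theta^2$, which is strictly increasing on $\theta > 0$. Then $g^{-1}(\psi) = \sqrt\psi$ and the formula predicts $f_\Psi(\psi) = e^{-\sqrt\psi}/(2\sqrt\psi)$. We compare $\Pr(a \le \Psi \le b)$ estimated by Monte Carlo against the numerical integral of that predicted density:

```python
import math
import random

# Change-of-variables check: Theta ~ Exponential(1), Psi = g(Theta) = Theta**2.
# Here g^{-1}(psi) = sqrt(psi) and d/dpsi g^{-1}(psi) = 1 / (2 sqrt(psi)),
# so the formula gives f_Psi(psi) = exp(-sqrt(psi)) / (2 sqrt(psi)).

def f_psi(psi):
    """Density of Psi = Theta**2 predicted by the change-of-variables formula."""
    return math.exp(-math.sqrt(psi)) / (2.0 * math.sqrt(psi))

# Monte Carlo estimate of P(a <= Psi <= b) by simulating Theta and squaring it.
random.seed(0)
n = 200_000
samples = [random.expovariate(1.0) ** 2 for _ in range(n)]

a, b = 0.5, 2.0
mc = sum(a <= s <= b for s in samples) / n

# Trapezoidal integration of the predicted density f_psi over [a, b].
m = 10_000
h = (b - a) / m
integral = sum(
    0.5 * h * (f_psi(a + i * h) + f_psi(a + (i + 1) * h)) for i in range(m)
)

print(f"Monte Carlo estimate of P({a} <= Psi <= {b}): {mc:.4f}")
print(f"Integral of predicted density over [{a}, {b}]: {integral:.4f}")
```

The two numbers agree (both are close to the exact value $e^{-\sqrt{0.5}} - e^{-\sqrt 2}$, since $\Pr(a \le \Psi \le b) = \Pr(\sqrt a \le \Theta \le \sqrt b)$ in this example).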