K.P. Murphy writes that this is because:

> The MLE does not suffer from this since the likelihood is a function, not a probability density.
I'm not sure I understand why or how that distinction matters, and it would be nice to get a more intuitive explanation.
The likelihood is a probability density in the data space, $p(x|\theta)$, but viewed as a function of the parameters, $L(\theta|x)$, it is *not* a probability density in the parameter space. In particular, there is no constraint on its integral over parameter values.

In contrast, any probability density in the parameter space, such as the posterior $p(\theta|x)$, is subject to a change-of-variables constraint: under an invertible reparameterization $\phi = f(\theta)$, the probability assigned to any region must be preserved, i.e. $$\int_{f(A)} p(\phi)\,d\phi = \int_A p(\theta)\,d\theta,$$ which forces the density to pick up a Jacobian factor, $p(\phi) = p(\theta)\left|\frac{d\theta}{d\phi}\right|$. Simply carrying over the same function values would not preserve these integrals, so the density, and with it the location of its maximum (the MAP estimate), changes under reparameterization.

The MLE does not suffer from this problem: the likelihood values simply carry over, $L(\phi|x) = L(\theta|x)$ with $\theta = f^{-1}(\phi)$, so the maximizer maps directly, $\hat\phi = f(\hat\theta)$.
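To make this concrete, here is a minimal numeric sketch (my own illustrative setup, not from Murphy): a Bernoulli likelihood with 2 successes in 7 trials and a flat prior on $\theta$, reparameterized to $\phi = \operatorname{logit}(\theta)$. The likelihood's argmax maps exactly through the reparameterization, while the posterior density of $\phi$ picks up the Jacobian $\theta(1-\theta)$ and its mode lands somewhere else.

```python
import numpy as np

# Bernoulli data: k = 2 successes out of n = 7 trials (illustrative numbers).
# With a flat prior on theta, posterior(theta) is proportional to the likelihood.
k, n = 2, 7

def log_lik(theta):
    # log L(theta | data) = k*log(theta) + (n-k)*log(1-theta)
    return k * np.log(theta) + (n - k) * np.log(1 - theta)

# Fine grid over theta in (0, 1) and the corresponding phi = logit(theta).
theta = np.linspace(1e-6, 1 - 1e-6, 2_000_001)
phi = np.log(theta / (1 - theta))

# MLE (and MAP, since the prior is flat) in the theta parameterization: ~ k/n = 2/7.
map_theta = theta[np.argmax(log_lik(theta))]

# The likelihood just composes under reparameterization, L(phi) = L(sigmoid(phi)),
# so its argmax maps exactly: sigmoid(phi_MLE) equals map_theta.
mle_phi = phi[np.argmax(log_lik(theta))]
mle_phi_back = 1 / (1 + np.exp(-mle_phi))

# The posterior *density* of phi must pick up the Jacobian |d theta / d phi|
# = theta * (1 - theta), which shifts the mode.
log_post_phi = log_lik(theta) + np.log(theta) + np.log(1 - theta)
map_phi = phi[np.argmax(log_post_phi)]
map_phi_back = 1 / (1 + np.exp(-map_phi))  # ~ 1/3, no longer 2/7

print(f"theta-space MAP/MLE:     {map_theta:.4f}")
print(f"MLE in phi, mapped back: {mle_phi_back:.4f}")
print(f"MAP in phi, mapped back: {map_phi_back:.4f}")
```

The MLE mapped back through the sigmoid agrees with the $\theta$-space estimate ($2/7 \approx 0.2857$), while the MAP in $\phi$-space maps back to $\approx 1/3$: same data, same prior, different point estimate purely because of the Jacobian.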