I can see that almost the same question was asked here, but I didn't really understand the answers, so I was wondering if someone could help me get a better grip on it. Here is the proof I came up with--can someone explain where the mistake is? And if you rely on some axiom or theorem to do so, could you explain that theorem, or link to a good explanation? My background in upper-level math is pretty spotty :/.
Proof:
Let $X$ be a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$, and let $f(x)$ be the probability density function of $X$. Let $Y=e^X$ be another random variable. Call its probability density function $g(y)$.
Let $x$ be a given possible value of $X$. Then $x$ has a probability density of $f(x)$.
Let $y = e^x$ be a given value of $Y$ which corresponds to $x$. Since $x$ has a probability density $f(x)$, $y$ must also have a probability density of $f(x)$. Then it follows that $$f(x) = g(y) \implies f(x) = g(e^x) \implies f(\ln x) = g(x)$$
and when you compute $g(x)$ using this information, you get the correct answer, but without an extra $x$ in the denominator outside the integral.
I know that the correct way to do this derivation involves using the CDFs instead. I tried to work through that proof and got stuck; here's what I managed so far:
The cumulative distribution functions for $X$ and $Y$ are $F(x)$ and $G(y)$, respectively. For any given $x$ and $y$, then,
$$F(x) = \int_{-\infty}^{x} f(t)\,dt$$ $$G(y) = \int_{-\infty}^{y} g(t)\,dt$$ $$F(x) = G(e^x) \implies (F(\ln x) = G(x)) \land (f(\ln x) = g(x))\implies \int_{-\infty}^{\ln x} f(\ln t)\,dt = \int_{-\infty}^{x}g(t)\,dt$$
But I don't think that gives any new information! I did figure out how to do the proof correctly: we know the CDF of a standard normal distribution is $F(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt$. So the CDF of $Y$ should be $F(\ln x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\ln x} e^{-t^2/2}\,dt$, and then $g(x)$ should be the derivative of that, which is $\frac{1}{x} f(\ln x)$, which is the correct answer. But I don't understand why the first method gives a different answer! Shouldn't both methods yield the same thing?
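One quick numerical way to see that the naive substitution $g(x) = f(\ln x)$ must be wrong is to check normalization: a density has to integrate to $1$ over its support, but $f(\ln x)$ does not, while $\frac{1}{x} f(\ln x)$ does. A minimal sketch in Python (assuming $\mu = 0$, $\sigma = 1$ for concreteness; the grid endpoints and resolution are arbitrary choices):

```python
import numpy as np

# Density of X ~ N(0, 1), matching the standard normal CDF above.
def f(t):
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

# Grid on (0, 200], the support of Y = e^X (endpoint chosen so the
# remaining tail mass is negligible).
x = np.linspace(1e-6, 200, 400_000)
dx = x[1] - x[0]

naive = np.sum(f(np.log(x))) * dx        # candidate g(x) = f(ln x)
chain = np.sum(f(np.log(x)) / x) * dx    # candidate g(x) = f(ln x) / x

print(naive)  # ~1.65 -- not a valid density
print(chain)  # ~1.00 -- integrates to 1
```

So whatever $g$ is, it cannot simply be $f$ evaluated at $\ln x$; the extra $\frac{1}{x}$ is needed just to conserve total probability.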
I'm not quite sure if I have understood your question correctly, but it seems to me that you are confused as to why you cannot simply substitute $y = e^x$ in the density of $X$ to arrive at the density of $Y$.

You should not think of the density as a probability, but rather as a function whose integral gives the probabilities that define the distribution, namely those of the form $P(X \leq x)$. Hence, $f(\ln(x))$ is the value of the density of $X$ at $\ln(x)$, nothing more. And this is where your argument is faulty: you claim that "$y$ must also have a probability density of $f(x)$", by which I suppose you mean that $g(y)$ must be $f(u(y))$ for some function $u$, but that is not the case. To find $g(y)$ you need to consider probabilities.

Here is how you would go about doing so. Firstly, suppose $X \sim N( \mu, \sigma^2)$ and let $Y = e^X$. Let $f_X$ denote the density of $X$, $f_Y$ the density of $Y$, and similarly $F_X$ and $F_Y$ for the distribution functions. We have
\begin{align*} F_Y (y) &= P(Y \leq y) = P(e^X \leq y) = P(X \leq \ln(y)) = F_X(\ln(y)) \\ &=\frac{1}{\sqrt{2\pi} \sigma} \int_{-\infty}^{\ln(y)} \exp \bigg(- \frac{1}{2} \big( \frac{x-\mu}{\sigma} \big)^2 \bigg) dx \end{align*}
At this point you want to use that $f_Y(y) = F_Y'(y)$ and the Fundamental Theorem of Calculus to find the derivative. Using this in combination with the Chain Rule yields
\begin{align*} f_Y(y) &= \frac{d}{dy} F_Y(y) =\frac{1}{\sqrt{2\pi} \sigma} \exp \bigg(- \frac{1}{2} \big( \frac{\ln(y)-\mu}{\sigma} \big)^2 \bigg) \frac{1}{y} \\ &= \frac{1}{y\sqrt{2\pi} \sigma} \exp \bigg(- \frac{(\ln(y)-\mu)^2}{2\sigma^2} \bigg), \end{align*}
which is the density of $Y$.
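As a sanity check on this final formula, you can compare it against simulation: draw many samples of $Y = e^X$ and compare the empirical CDF with the numerically integrated density. A small Python sketch (the parameter values $\mu = 0.5$, $\sigma = 0.8$ and the checkpoints are arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 0.8  # arbitrary example parameters

# Simulate Y = e^X with X ~ N(mu, sigma^2).
y = np.exp(rng.normal(mu, sigma, size=1_000_000))

# The density f_Y derived above.
def f_Y(t):
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma**2)) \
        / (t * np.sqrt(2 * np.pi) * sigma)

# F_Y(q) by numerically integrating f_Y, vs. the empirical CDF.
for q in (0.5, 1.0, 2.0, 5.0):
    grid = np.linspace(1e-9, q, 100_000)
    theoretical = np.sum(f_Y(grid)) * (grid[1] - grid[0])
    empirical = np.mean(y <= q)
    print(q, round(theoretical, 3), round(empirical, 3))
```

The two columns should agree to a few decimal places, which would not happen if the $\frac{1}{y}$ factor were dropped.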