I am trying to simulate a log-normal random variable $Y$ with mean $m = \mathbb{E}[Y] = 0.001$ and standard deviation $s = 0.094$ by simulating a normal sample instead, and then exponentiating it.
From the definition of the log-normal distribution, if we let $X = \ln Y$ then $X \sim \mathcal{N}(\mu, \sigma^2)$, where $$ \sigma^2 = \ln \left(\frac{s^2}{m^2}+1\right) \text{ and } \mu = \ln(m) - \frac{\sigma^2}{2}, $$ which in my case gives $$ \mu \approx -11.4511, \quad \sigma \approx 3.0144. $$
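For concreteness, here is a minimal Python sketch of that conversion (variable names are mine):

```python
import math

m, s = 0.001, 0.094  # target lognormal mean and standard deviation

# Parameters of the underlying normal X = ln(Y)
sigma2 = math.log(s**2 / m**2 + 1)
sigma = math.sqrt(sigma2)
mu = math.log(m) - sigma2 / 2

print(mu, sigma)  # approximately -11.4511 and 3.0144
```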
So I simulate a large normally distributed sample (one million points), whose sample mean and standard deviation come out to approximately $-11.448$ and $3.01435$, which is very close.
Then, I exponentiate every entry in the sample, expecting to get a log-normal sample with my original parameters. The mean and standard deviation of this exponentiated sample are $0.0010199$ and $0.055741$. So the mean comes in very close, but the standard deviation is way off the mark.
I have repeated the experiment multiple times, and in every run the mean is matched but the standard deviation is not. I implemented this both in Excel and in Python (happy to post the code if needed), but I don't understand why I cannot match the standard deviation of the log-normal sample.
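A sketch of the kind of Python code I am running (the numpy seed and variable names are arbitrary):

```python
import numpy as np

m, s = 0.001, 0.094
sigma = np.sqrt(np.log(s**2 / m**2 + 1))
mu = np.log(m) - sigma**2 / 2

rng = np.random.default_rng(12345)
x = rng.normal(mu, sigma, size=1_000_000)  # normal sample
y = np.exp(x)                              # exponentiate to get a lognormal sample

print(x.mean(), x.std())  # lands close to mu and sigma, as expected
print(y.mean(), y.std())  # mean lands near 0.001; the SD is usually well off 0.094
```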
It could be that the sample simply needs more points to converge, but one million points seems like a very large number; shouldn't that be enough to get at least 2-3 digits after the decimal point correct?
I would appreciate any ideas. Thank you very much.
I think the particular values of $m$ and $s$ you are using give rise to unexpected computational difficulties. Perhaps it is just too much to wish for two-place accuracy retrieving $s$ with a million simulated values. Also, taking the ordinary SD of lognormal data may not be the optimal way to estimate parameter $s$ (especially when it is small). I'm not absolutely sure of any of this because I have not taken the time to track exactly where it goes wrong.
However, if $m = .1$ and $s = .4$, your method seems to work just fine, as a brief session in R confirms.
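A rough Python equivalent of that check (the seed is arbitrary, and the variable names are mine):

```python
import numpy as np

m, s = 0.1, 0.4
sigma = np.sqrt(np.log(s**2 / m**2 + 1))
mu = np.log(m) - sigma**2 / 2

rng = np.random.default_rng(2023)
y = np.exp(rng.normal(mu, sigma, size=1_000_000))

# The sample mean lands very close to 0.1; the sample SD lands
# in the neighborhood of 0.4, unlike in the original example.
print(y.mean(), y.std())
```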
Searching the internet for 'lognormal distribution', I found a MathWorks page that says (in your notation) that a lognormal distribution with mean $m$ and variance $s^2$ has $\mu = \ln(m^2/\sqrt{s^2 + m^2})$ and $\sigma = \sqrt{\ln(1+ s^2/m^2)}.$ Maybe it is my imagination, but for various values of $m$ and $s$ these computations seemed a little more numerically stable for getting $\mu$ and $\sigma$. (But that still does not cure your problem.)
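The two parameterizations are in fact algebraically identical, since $\ln(m^2/\sqrt{s^2+m^2}) = 2\ln(m) - \tfrac{1}{2}\ln(s^2+m^2) = \ln(m) - \tfrac{1}{2}\ln(s^2/m^2+1)$. A quick Python check of this, using your values:

```python
import math

m, s = 0.001, 0.094

# Parameterization from the question
sigma2 = math.log(s**2 / m**2 + 1)
mu1 = math.log(m) - sigma2 / 2

# MathWorks parameterization
mu2 = math.log(m**2 / math.sqrt(s**2 + m**2))

print(abs(mu1 - mu2))  # agreement to floating-point precision
```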
R also generates lognormal samples directly with the function rlnorm, which takes the log-scale parameters meanlog and sdlog and gives equivalent results.
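NumPy offers the same shortcut; a sketch (arbitrary seed) using the generator method lognormal, which plays the role of rlnorm:

```python
import numpy as np

m, s = 0.1, 0.4
sigma = np.sqrt(np.log(s**2 / m**2 + 1))
mu = np.log(m) - sigma**2 / 2

rng = np.random.default_rng(7)
# Direct lognormal sampling; mean/sigma are the log-scale parameters,
# just like meanlog/sdlog in R's rlnorm.
y = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

print(y.mean(), y.std())  # comparable to the exp-of-normal approach
```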