I have a variable X that is distributed log-normally.
I let $Y = \ln X \sim N(\mu, \sigma^2)$ and I've been given $\sigma = 0.3$, $\bar{y} = 0.12$ and $n = 40$.
So I find a confidence interval for the mean of the log-transformed data like this:
$(\bar{y}-z_{1-\alpha/2}\times\frac{\sigma}{\sqrt{n}},\ \bar{y}+z_{1-\alpha/2}\times\frac{\sigma}{\sqrt{n}})\\ (0.12-1.96\times\frac{0.3}{\sqrt{40}},\ 0.12+1.96\times\frac{0.3}{\sqrt{40}})\\ (0.027,\ 0.213)$
To get the 95% confidence interval for E(X) (the original variable) I just raise e to the power of the endpoints of the interval I just calculated.
So the interval would be $(e^{0.027}, e^{0.213}) = (1.03, 1.24)$.
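For concreteness, here is the calculation above checked numerically in Python (only the standard library; the z-value is the 97.5th normal percentile):

```python
import math

sigma, ybar, n = 0.3, 0.12, 40
z = 1.959963984540054  # 97.5th percentile of the standard normal

# 95% CI for mu, the mean of the log-transformed data
half_width = z * sigma / math.sqrt(n)
lo_mu, hi_mu = ybar - half_width, ybar + half_width
print(round(lo_mu, 3), round(hi_mu, 3))                       # 0.027 0.213

# exponentiate the endpoints
print(round(math.exp(lo_mu), 2), round(math.exp(hi_mu), 2))   # 1.03 1.24
```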
Is this correct?
Thanks, any help is appreciated.
According to https://ww2.amstat.org/publications/jse/v13n1/olsson.html (which writes $\theta = E(X)$), the method outlined by the OP is biased: exponentiating the endpoints of a confidence interval for $\mu$ yields an interval for the median $e^{\mu}$, not for the mean $\theta = e^{\mu + \sigma^2/2}$.
The paper offers several alternatives, including the Cox method. Since, as in the OP's case, $\sigma^2$ is known, the Cox method simplifies: add $\frac{1}{2}\sigma^2$ to both endpoints of the confidence interval for $\mu$ and then take antilogs.
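With the OP's numbers (known $\sigma = 0.3$, $\bar{y} = 0.12$, $n = 40$), that modification is a one-line change to the earlier calculation, sketched here in Python:

```python
import math

sigma, ybar, n = 0.3, 0.12, 40
z = 1.959963984540054  # 97.5th percentile of the standard normal

half_width = z * sigma / math.sqrt(n)
center = ybar + 0.5 * sigma**2     # shift by sigma^2/2: estimates ln E(X)
lo = math.exp(center - half_width) # then take antilogs of both endpoints
hi = math.exp(center + half_width)
print(round(lo, 3), round(hi, 3))  # 1.075 1.294
```

Note the interval sits noticeably above the OP's (1.03, 1.24), as expected when targeting the mean rather than the median.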
The end of the paper also provides simulation studies assessing the coverage of the confidence intervals under each method.
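A minimal Monte Carlo sketch of such a coverage check, using the OP's parameters (the true $\mu = 0.12$, the replication count, and the seed are illustrative choices, not from the paper):

```python
import math
import random

random.seed(1)
mu, sigma, n = 0.12, 0.3, 40
theta = math.exp(mu + 0.5 * sigma**2)  # true mean E(X) of the log-normal
z = 1.959963984540054
hw = z * sigma / math.sqrt(n)

naive_hits = cox_hits = 0
reps = 20000
for _ in range(reps):
    ybar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    # naive method: exponentiate the CI for mu directly
    if math.exp(ybar - hw) <= theta <= math.exp(ybar + hw):
        naive_hits += 1
    # Cox method (known sigma^2): shift by sigma^2/2 before exponentiating
    c = ybar + 0.5 * sigma**2
    if math.exp(c - hw) <= theta <= math.exp(c + hw):
        cox_hits += 1

# Cox coverage should be close to the nominal 0.95;
# the naive interval undercovers (around 0.84 for these parameters)
print(naive_hits / reps, cox_hits / reps)
```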