I am collecting concentration estimates (and the associated standard deviations) for proteins from a large number of papers. Most of the estimates are reported on the regular (untransformed) scale, but my use of them requires a log scale. I have found methods to approximate the log-scale values from the regular-scale ones, and they are close enough for my purposes.
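(For reference, one such approximation - assuming the underlying data are roughly lognormal - is simple moment matching: given a reported arithmetic mean $m$ and standard deviation $s$,
$$\mu_{\log} = \ln\!\left(\frac{m^2}{\sqrt{m^2 + s^2}}\right), \qquad \sigma_{\log} = \sqrt{\ln\!\left(1 + \frac{s^2}{m^2}\right)}.$$
The exact formula isn't the issue here, but it shows the kind of conversion I'm doing.)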
Occasionally, I find a paper that reports a geometric mean - which is fantastic! I can take a log and no approximation is required. However, in some of these cases, I do not know how to interpret the reported standard deviation. Case in point: an estimated (geometric) mean of 0.40 with a standard deviation of 0.10.
The authors are clearly not reporting a geometric standard deviation: using the formula I've seen on Wikipedia (https://en.wikipedia.org/wiki/Geometric_standard_deviation), the geometric standard deviation is the exponential of the standard deviation on the log scale, so it can never be less than 1. To get it "back" to the log scale, I would need to take a log - except in this case $\log(0.10) < 0$, which would imply a negative log-scale standard deviation.
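To make that concrete, here is a quick sketch with made-up concentration values (not from any paper), just illustrating the relationship between the geometric SD and the SD of the logged data:

```
import numpy as np

# Made-up concentration values, purely for illustration
x = np.array([0.31, 0.42, 0.55, 0.38, 0.47])
log_x = np.log(x)

geo_mean = np.exp(log_x.mean())      # geometric mean
geo_sd = np.exp(log_x.std(ddof=1))   # geometric SD = exp(SD of logs), so always >= 1

print(geo_mean, geo_sd, np.log(geo_sd))  # np.log(geo_sd) recovers the log-scale SD
```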
I suppose it's possible they calculated the standard deviation on the un-transformed data, but (1) it seems a very strange choice to pair with a geometric mean, and (2) most estimates I've seen that report a mean and standard deviation on the regular scale have a standard deviation close in size to the mean (e.g. the standard deviation here should be around 0.4 or slightly higher, given the difference in how this mean was calculated relative to most). This isn't "proof" per se - just an observation from studying hundreds of compiled concentration estimates.
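(For completeness: if the 0.10 really were the standard deviation of the raw data, and the data were roughly lognormal, the implied log-scale standard deviation could be backed out from the lognormal moment relations. A rough sketch of that check, assuming exact lognormality:)

```
import numpy as np

# Lognormal moment relations: GM = exp(mu),
# Var = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) = (v - 1) * GM^2 * v,
# where v = exp(sigma^2). Solve for v given the reported numbers.
gm, s = 0.40, 0.10

v = (1 + np.sqrt(1 + 4 * s**2 / gm**2)) / 2
sigma_log = np.sqrt(np.log(v))
print(sigma_log)  # ~0.24
```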
My question: Is there any other (standard) way to interpret a standard deviation of 0.10 in this case? When I work with log-normal data, I typically give a point estimate and a confidence interval to avoid this problem - so it's not something I've run into before.
I know the calculations were done in SPSS, if that helps.