I have the following problem:
One wants to estimate the expectation of a random variable X. A set of 16 data values (i.e. simulation outputs) is given, and one should determine roughly how many additional values should be generated for the standard deviation of the estimate to be less than 0.1.
If $k$ is the total number of values required, I think one should solve $S_k/\sqrt{k} < 0.1$ for $k$, where $S_k$ is the sample standard deviation based on all $k$ values.
The problem is that only 16 values are given, so it does not seem reasonable to use the sample standard deviation computed from them as an approximation for $S_k$. How should one proceed?
If $X$ is a member of the normal family and you are estimating $\mu$, then $Q = 15S_{16}^2/\sigma^2 \sim Chisq(15).$ Thus, $$P(Q > L) = P(\sigma^2 < 15 S_{16}^2/L) = 0.95,$$ where $L \approx 7.26$ cuts 5% of the probability from the lower tail of $Chisq(15).$
Then you have a pretty good (worst case) upper bound for $\sigma^2,$ and upon taking the square root, for $\sigma.$ Conservatively, you could use that value instead of $S_{16}$ in your formula for $k$.
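To make this concrete, here is a sketch of the calculation in Python. The 16 data values are made up for illustration; the lower 5% point of $Chisq(15)$ is hardcoded (it can be obtained from a table or from `scipy.stats.chi2.ppf(0.05, 15)`).

```python
import math

# Hypothetical simulation outputs -- replace with your 16 actual values.
x = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 4.7, 4.2,
     4.9, 3.7, 4.5, 4.3, 4.8, 4.0, 4.6, 4.1]

n = len(x)                                        # n = 16
mean = sum(x) / n
s2 = sum((v - mean) ** 2 for v in x) / (n - 1)    # sample variance S_16^2

L = 7.2609                                        # lower 5% point of Chisq(15)
sigma_upper = math.sqrt((n - 1) * s2 / L)         # conservative upper bound for sigma

target = 0.1
# Solve sigma_upper / sqrt(k) < 0.1 for k (total number of values needed).
k = math.ceil((sigma_upper / target) ** 2)
print(s2, sigma_upper, k)
```

With this made-up data, $S_{16} = 0.4$, so the naive plug-in $k = \lceil(S_{16}/0.1)^2\rceil = 16$, while the conservative chi-square bound gives $k = 34$ -- which illustrates how much the bound can inflate the answer with only 15 degrees of freedom.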
Similar strategies would work for other distributional families.
However, my guess is that you are just supposed to assume $S_{16} \approx \sigma$ and forge ahead.
In practice, you can always do a reality check at the end of the simulation by using $2S_k/\sqrt{k}$ as an approximate 95% margin of simulation error for the estimate (where $S_k$ is the SD of the $k$ simulated values of the estimate). This works as long as the estimator is asymptotically normal and $k$ is reasonably large.