Let's say I have a dataset that is 100 elements long,
$X = \{x_1, \dots, x_{100}\}$,
and I do 1,000 Monte Carlo realizations of the data, $X_j,\ 1 \leq j \leq 1000$, sampling 10 points each time.
If I then compute the variance (or any other estimator) on those ten points, and do that for every realization, I can use the results to tell me about the variance of the whole dataset:
$Var(X) \approx \dfrac{\sum_{j=1}^{1000}Var(X_j)}{1000}$
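To make the setup concrete, here is a minimal sketch of the procedure in Python/NumPy. The dataset, its distribution, the seed, and the variable names are all my own illustrative choices, not part of the question; `ddof=1` is used so each subsample variance is an unbiased estimate of the full-sample variance.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility

# Hypothetical 100-element dataset standing in for X
X = rng.normal(loc=0.0, scale=2.0, size=100)

n_realizations = 1000  # number of Monte Carlo realizations
sample_size = 10       # points drawn per realization

# Variance of each 10-point subsample (one per realization)
sub_vars = np.array([
    np.var(rng.choice(X, size=sample_size, replace=False), ddof=1)
    for _ in range(n_realizations)
])

mc_estimate = sub_vars.mean()     # average of the 1,000 subsample variances
mc_spread = sub_vars.std(ddof=1)  # spread -> error bar on that estimate
full_var = np.var(X, ddof=1)      # variance computed once on the whole set
```

Running this, `mc_estimate` lands close to `full_var`, while `mc_spread` quantifies how much a single 10-point variance estimate wanders, which is the extra information the subsampling provides.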
So, why would I do this? If I take those 1,000 variances and average them, shouldn't the result just converge to the variance of the whole set, as if I had measured it once? Is the only reason to do it this way to be able to estimate the error on the computed variance?