The context:
I'm running a simulation study in which I want to compare two estimation methods by evaluating their estimation accuracy. To do this, I take a set of model parameters and generate 100 simulated datasets. To compare the methods, I use the bias and the MSE and, as a measure of uncertainty, I calculate the SD of these two measurements. Up to here, everything is fine. However, since the set of parameters is too large to report the bias and MSE of every single parameter, I average the bias and the MSE over all parameters, and I want to know the correct way to calculate the SD of the average bias and the average MSE.
The problem in simplified math terms:
I have a vector of true parameters $\boldsymbol{\theta} = \left(\theta_1, \ldots, \theta_n\right)$, where $\theta_i \in (0,1)$ for all $i=1,\ldots,n$ (they are not iid). With this vector of parameters, we generate 100 simulated datasets $X_1, \ldots, X_{100}$ and, for each dataset, we obtain the estimate $\hat{\boldsymbol{\theta}}_{r} = \left(\hat{\theta}_{1r}, \ldots, \hat{\theta}_{nr}\right)$, $r=1,\ldots, 100$.
Now, we calculate $$\mathbf{b} = (b_{1},\ldots, b_{n}), \text{where } b_i = \widehat{Bias}(\hat{\theta}_i) = \hat{E}[\hat{\theta}_{i} - \theta_i] = \frac{1}{100}\sum_{r=1}^{100}(\hat{\theta}_{ir} - \theta_i), \text{ for all } i=1,\ldots, n$$
$$\mathbf{m} = (m_{1},\ldots, m_{n}), \text{where } m_i = \widehat{MSE}(\hat{\theta}_i) = \hat{E}\left[(\hat{\theta}_{i} - \theta_i)^2\right] = \frac{1}{100}\sum_{r=1}^{100}(\hat{\theta}_{ir} - \theta_i)^2, \text{ for all } i=1,\ldots, n$$
After that, we obtain the averages $\overline{b} = \frac{1}{n}\sum_{i=1}^{n} b_i$ and $\overline{m} = \frac{1}{n}\sum_{i=1}^{n} m_i$.
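For concreteness, here is a minimal NumPy sketch of the quantities defined above, using made-up values for $n$, $\boldsymbol{\theta}$, and the estimates (the `theta_hat` array is simulated noise purely for illustration, not either estimation method):

```python
import numpy as np

rng = np.random.default_rng(0)

n, R = 5, 100                        # n parameters, R = 100 replications
theta = rng.uniform(0.1, 0.9, n)     # hypothetical true parameters in (0, 1)

# theta_hat[r, i]: estimate of theta_i from replication r
# (stand-in values; in the real study these come from the estimation method)
theta_hat = theta + rng.normal(0.0, 0.05, (R, n))

errors = theta_hat - theta           # shape (R, n); theta broadcasts over rows
b = errors.mean(axis=0)              # per-parameter bias b_i, length n
m = (errors ** 2).mean(axis=0)       # per-parameter MSE m_i, length n

b_bar = b.mean()                     # average bias over the n parameters
m_bar = m.mean()                     # average MSE over the n parameters
print(b_bar, m_bar)
```

The question below is then about attaching an uncertainty measure (an SD) to `b_bar` and `m_bar`.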
The question is: what is the correct way to calculate the standard deviation of $\overline{b}$ and $\overline{m}$ to represent the uncertainty of these two measures?