I have recently been reading about error propagation. Several approaches exist, among them linearisation of the model via its first-order Taylor expansion, Bayes' theorem, and Monte Carlo. I am interested in learning more about the last method. In a nutshell it can be described as follows: given the model $$ V=F(x_1, \dots, x_n), $$ generate a number of independent random realizations of the input vector $(x_1,\dots,x_n)$, feed those values to the model $F(x_1, \dots, x_n)$, and observe the mean and the variance of the outcome $V$.
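To make the procedure concrete, here is a minimal sketch of my understanding in Python; the model $F$, the input distributions, and the sample count are all placeholder assumptions, not part of any real application:

```python
import random
import statistics

def F(x1, x2):
    # Hypothetical deterministic model, chosen only for illustration
    return x1 * x2 + x1 ** 2

random.seed(0)
N = 100_000  # number of Monte Carlo realizations (assumed)

samples = []
for _ in range(N):
    # One independent realization of each input
    # (assumed distributions: x1 ~ N(1.0, 0.1^2), x2 ~ N(2.0, 0.2^2))
    x1 = random.gauss(1.0, 0.1)
    x2 = random.gauss(2.0, 0.2)
    samples.append(F(x1, x2))

mean_V = statistics.fmean(samples)
var_V = statistics.variance(samples)
print(mean_V, var_V)
```

The sample mean and sample variance of the collected outputs then serve as estimates of the mean and variance of $V$.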
Note that the model need not be expressible in closed analytical form. It might, for instance, be code that calculates the pressure drop in a heated rod bundle. In that case the governing equations are solved numerically, and input variables such as surface roughness are treated as having some underlying uncertainty. This uncertainty can be propagated by running the model on randomly sampled input values and observing the resulting spread of the output.
Unfortunately, I am a bit confused about some of the method's properties. Firstly, is it true that correlations between the input variables are taken into account, i.e. that the method can handle both correlated and independent inputs? I don't really understand this, since all the $x_i$ are always sampled independently. Is it really as simple as generating random values for the inputs $x_1, \dots, x_n$ and evaluating the function that represents our model, $V=F(x_1, \dots, x_n)$?
Secondly, I don't understand how many times $V$ has to be evaluated to obtain a reliable mean and variance. The only thing I came up with is to apply Chebyshev's inequality to the sample mean and estimate how many samples need to be drawn to reach a certain probability that the absolute difference between the true mean and the sample mean is less than some fraction of the variance. I would be grateful if someone could shed some light on these two problems.
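To illustrate what I mean: Chebyshev's inequality applied to the sample mean gives $P(|\bar V_N - \mu| \ge \varepsilon) \le \sigma^2 / (N \varepsilon^2)$, which can be solved for $N$. A sketch comparing it with a CLT-based estimate (the values of $\sigma$, $\varepsilon$, and the risk level are arbitrary assumptions):

```python
import math

def chebyshev_sample_size(sigma, eps, delta):
    # Chebyshev bound on the sample mean:
    #   P(|mean_N - mu| >= eps) <= sigma**2 / (N * eps**2)
    # Solving for N at failure probability delta:
    return math.ceil(sigma ** 2 / (delta * eps ** 2))

# Assumed values for illustration: output sd 2.0, tolerance 0.1, 5% risk
n_cheb = chebyshev_sample_size(2.0, 0.1, 0.05)

# CLT-based estimate: eps ~ z * sigma / sqrt(N), z = 1.96 for 95% coverage
n_clt = math.ceil((1.96 * 2.0 / 0.1) ** 2)

print(n_cheb, n_clt)
```

Chebyshev makes no distributional assumption and is therefore much more conservative than the CLT-based estimate; I am unsure which is the appropriate tool here.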
EDIT: In accordance with one of the comments, I would like to include a specific example: $$ V=\int_s^b f(x) \left(1- \left( \frac{s}{x} \right)^m \right) dx, $$ where $f(x)>0$ is a real, differentiable function on the interval $[c,b] \subset \mathbb{R}^+$, with $\mathbb{R}^+$ denoting the positive real line. The unknown variables $s$ and $m$ are normally distributed. For simplicity one could use $$ f(x)=\frac{1}{\sqrt{x}}. $$ The general model fits a particle reaction rate: by adjusting $s$ and $m$, the integral $V$ approaches (fits) some measured value. The question is how the uncertainty in $s$ and $m$ affects the value of $V$.
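For completeness, here is how I would sketch the propagation for this example with $f(x)=1/\sqrt{x}$. The upper limit $b$, the means and standard deviations of $s$ and $m$, and the quadrature resolution are all arbitrary assumptions, chosen so that $0 < s < b$ holds in practice:

```python
import random
import math
import statistics

def f(x):
    return 1.0 / math.sqrt(x)

def V(s, m, b=2.0, n=400):
    # Midpoint-rule quadrature of the integral from s to b
    # (n subintervals; resolution is an assumption)
    h = (b - s) / n
    total = 0.0
    for i in range(n):
        x = s + (i + 0.5) * h
        total += f(x) * (1.0 - (s / x) ** m)
    return total * h

random.seed(2)
N = 2000  # Monte Carlo realizations (assumed)
vals = []
for _ in range(N):
    # Assumed input distributions: s ~ N(0.5, 0.05^2), m ~ N(2.0, 0.1^2)
    s = random.gauss(0.5, 0.05)
    m = random.gauss(2.0, 0.1)
    vals.append(V(s, m))

print(statistics.fmean(vals), statistics.stdev(vals))
```

The spread of the collected values of $V$ would then quantify how the uncertainty in $s$ and $m$ propagates to the fitted integral.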