I earlier asked the question Error propagation, why use variances?, and am now slightly confused about the link between error propagation and total differentials. As mentioned in the linked question, if we have a random variable given by $c=f(x,y)$, then the worst-case error in $c$ is given by: $$\sigma_c = f_x \sigma_x + f_y \sigma_y$$
Let this be equation (1) (where the partial derivatives are evaluated at the mean values of $x$ and $y$).
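To fix notation, here is a small numerical sketch of how I read equation (1); the choice $f(x,y)=xy$ and all the numbers are just illustrative:

```python
# Sanity check of equation (1) for a concrete (made-up) example:
# f(x, y) = x*y with mean values x = 2.0, y = 3.0 and uncertainties
# sigma_x = 0.1, sigma_y = 0.2.  All numbers here are illustrative.
x_bar, y_bar = 2.0, 3.0
sigma_x, sigma_y = 0.1, 0.2

# Partial derivatives of f(x, y) = x*y, evaluated at the means:
f_x = y_bar  # df/dx = y
f_y = x_bar  # df/dy = x

# Worst-case error from equation (1):
sigma_c = f_x * sigma_x + f_y * sigma_y  # 3*0.1 + 2*0.2 = 0.7
print(sigma_c)
```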
By "worst case" I assume we mean that there is total correlation between $x$ and $y$, i.e. the error in $x$ is proportional to the error in $y$ and of the same sign. Firstly, is this correct, and do we need to take the modulus of the partial derivatives to ensure that $\sigma_c$ is always positive? And secondly, with total differentials we can use the formula:
$$\Delta c \approx f_x \Delta x + f_y \Delta y$$ Let this be equation (2).
There is obviously a link between equations (1) and (2), but I cannot quite see it. Here is where I am confused:
- The value of $\sigma_x$ can only take positive values, whilst $\Delta x$ can be either positive or negative.
- I have always thought that equation (2) holds no matter what the correlation between $x$ and $y$ is (they could be independent or identical, and equation (2) would still hold). This does not seem to be the case for equation (1), which seems to require complete correlation between the two variables.
So what is the relationship between these two equations?
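To make the second bullet point concrete, here is a quick Monte Carlo sketch (again with the illustrative choice $f(x,y)=xy$, $\bar x=2$, $\bar y=3$, $\sigma_x=0.1$, $\sigma_y=0.2$). My understanding is that with fully correlated errors the spread of $c$ should come out near the linear sum in equation (1), while with independent errors it should come out near the quadrature sum instead:

```python
import random
random.seed(0)

# Illustrative values: f(x, y) = x*y, means 2.0 and 3.0,
# sigma_x = 0.1, sigma_y = 0.2.
x_bar, y_bar, sx, sy = 2.0, 3.0, 0.1, 0.2
n = 200_000

def std(samples):
    """Sample standard deviation (population form)."""
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5

corr = []   # fully correlated errors: one shared z drives both x and y
indep = []  # independent errors: separate draws for x and y
for _ in range(n):
    z = random.gauss(0, 1)
    corr.append((x_bar + sx * z) * (y_bar + sy * z))
    indep.append((x_bar + sx * random.gauss(0, 1)) *
                 (y_bar + sy * random.gauss(0, 1)))

# Correlated case: spread near |f_x| sx + |f_y| sy = 3*0.1 + 2*0.2 = 0.7
print(std(corr))
# Independent case: spread near sqrt((f_x sx)^2 + (f_y sy)^2) = 0.5
print(std(indep))
```

(The small mismatch with the exact values 0.7 and 0.5 comes from the second-order term $\sigma_x\sigma_y z^2$ that the linearisation drops, plus Monte Carlo noise.)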