With some changes in notation and names, I often see my (engineering) students using the following technique to estimate the uncertainty of a quantity computed from measurements:
Suppose that $x_1,...,x_n$ are measured with uncertainties $\delta x_1,...,\delta x_n$, and these measured values are used to compute the function $f(x_1,...,x_n)$. If the uncertainties in $x_1,...,x_n$ are independent and random, then the uncertainty in $f$ is \begin{equation}\label{eq1} \delta f = \sqrt{ \left(\dfrac{\partial f}{\partial x_1}\delta x_1 \right)^2 + ... + \left(\dfrac{\partial f}{\partial x_n}\delta x_n \right)^2 } \end{equation}
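In code, the rule reads like this (a minimal Python sketch of my own, not from any textbook; the function and variable names are made up), with the partial derivatives approximated by central finite differences at the measured point:

```python
import math

def propagate(f, x, dx, h=1e-6):
    """Quadrature rule: delta_f = sqrt(sum_i (df/dx_i * dx_i)^2).

    The partial derivatives are estimated numerically at the
    measured point x; dx holds the measurement uncertainties."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdxi = (f(xp) - f(xm)) / (2 * h)  # central difference
        total += (dfdxi * dx[i]) ** 2
    return math.sqrt(total)

# Example: volume of a cylinder, V = pi * r^2 * h
vol = lambda v: math.pi * v[0] ** 2 * v[1]
print(propagate(vol, [2.0, 5.0], [0.1, 0.1]))
```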
The partial derivatives are evaluated at the measured point, and the $\delta x_i$ are taken to be the corresponding measurement uncertainties.
I can see an obvious motivation here: approximating $f$ near a point $(a_1,...,a_n)$ of its domain by its degree-one Taylor polynomial (assuming differentiability): $$ f(x_1,...,x_n)-f(a_1,...,a_n)\simeq \sum_{i=1}^n \dfrac{\partial f}{\partial x_i}(a_1,...,a_n)\,(x_i-a_i) $$
The problem is: why the squares in the formula for $\delta f$? If you square both sides of the first-degree Taylor approximation of $f$, you get cross-terms between partial derivatives with respect to different coordinates. Does the uncertainty formula mean that these should be treated as zero when it assumes the variables are "independent and random"? It seems to me that, even when the variables are "independent" (say, the radius and height of a cylinder), these cross-terms can affect the final uncertainty (the volume of the cylinder) more than the estimate would have us believe.
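To make the worry concrete, here is a quick Monte Carlo sketch (my own, with made-up numbers) for the cylinder example: draw independent Gaussian errors for the radius and the height, and compare the empirical spread of the volume with the quadrature estimate:

```python
import math
import random

random.seed(0)
r0, h0 = 2.0, 5.0    # measured radius and height
dr, dh = 0.05, 0.05  # their (independent) uncertainties

# Quadrature estimate: dV/dr = 2*pi*r*h, dV/dh = pi*r^2
quadrature = math.sqrt((2 * math.pi * r0 * h0 * dr) ** 2
                       + (math.pi * r0 ** 2 * dh) ** 2)

# Empirical spread of V = pi * r^2 * h over independent Gaussian draws
samples = [math.pi * random.gauss(r0, dr) ** 2 * random.gauss(h0, dh)
           for _ in range(200_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((v - mean) ** 2 for v in samples) / len(samples))

print(f"quadrature: {quadrature:.4f}, Monte Carlo: {std:.4f}")
```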
Working with inequalities does not lead to the desired result either.
I'm not sure I was clear. The main reference here is John R. Taylor's "An Introduction to Error Analysis" (1997); see near Eq. (3.47), p. 75.
Curiously, the author "Taylor" does not mention the "Taylor" polynomial. So many lost opportunities here.