I've been searching the internet for a few weeks and picking the brains of colleagues in person without success, trying to work out how error propagates into a derivative when the error in the underlying measurements is known. (Hopefully I'm on the right SE; Physics was also tempting for this question.)
In my work, I have a monotonically decreasing function $f(x,y)$, where $x(t)$ and $y(t)$ are functions of time that also decrease with time. I know the errors in $x$ and $y$ to be $\delta x$ and $\delta y$, so the error in $f$ is $\delta f = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 \delta x^2 + \left(\frac{\partial f}{\partial y}\right)^2 \delta y^2}$.
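For concreteness, here is a minimal sketch of that quadrature propagation applied to the example function given further down, $f = \frac{\pi}{6}(3y^2x + x^3)$ (the partial derivatives are worked out by hand; everything else here is just the standard formula):

```python
import numpy as np

def f(x, y):
    # Example function from the question: f = (pi/6) * (3*y**2*x + x**3)
    return np.pi / 6 * (3 * y**2 * x + x**3)

def delta_f(x, y, dx, dy):
    """Standard quadrature error propagation:
    delta_f = sqrt((df/dx)^2 * dx^2 + (df/dy)^2 * dy^2)."""
    dfdx = np.pi / 2 * (x**2 + y**2)  # partial f / partial x
    dfdy = np.pi * x * y              # partial f / partial y
    return np.sqrt(dfdx**2 * dx**2 + dfdy**2 * dy**2)
```

For instance, at $x = y = 1$ with $\delta x = \delta y = 0.1$, both partials equal $\pi$, so $\delta f = \pi \cdot 0.1 \sqrt{2}$.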
However, $\frac{df}{dt}$ must also have some error associated with it, corresponding intuitively to the errors in the original measurements: $\delta x, \delta y \rightarrow \delta\!\left(\frac{df}{dt}\right)$. It is the error in the time derivative of $f$ that I'm after: $\delta\!\left(\frac{df}{dt}\right) = {}$?
My first attempts to work out $\delta\!\left(\frac{df}{dt}\right)$ have not been successful. With standard error-propagation rules I always end up stuck with the unknown error of some other time derivative. I've also tried taking the time derivative of $\delta f$ itself, but $\delta f$ is not monotonic, so its derivative is not always positive, and I would reason that my error (essentially a standard deviation) should be a positive number.
Here's some example data that illustrates the issue, showing $x(t)$, $y(t)$, and $f(x(t), y(t))$. Here $f = \frac{\pi}{6}(3y^2 x + x^3)$ and the bands represent the error in the measurements. The error in $y$ is constant while the error in $x$ varies with time, and their combined influence on $f$ gives an overall narrowing error band.
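Data with the qualitative features described above can be reproduced with a short sketch. Note the specific forms of $x(t)$, $y(t)$, $\delta x(t)$, and $\delta y$ below are my own assumptions for illustration (the question only states that $x$ and $y$ decrease, $\delta y$ is constant, and $\delta x$ varies), but they reproduce the narrowing band on $f$:

```python
import numpy as np

# Hypothetical measurements -- assumed forms, chosen only so that
# x and y decrease with time, dy is constant, and dx varies with time.
t = np.linspace(0.0, 10.0, 101)
x = 5.0 * np.exp(-0.2 * t)      # decreasing x(t)
y = 3.0 - 0.2 * t               # decreasing y(t)
dx = 0.05 + 0.04 * t / t[-1]    # error in x grows with time
dy = np.full_like(t, 0.1)       # constant error in y

# f and its propagated error (half-width of the error band)
f = np.pi / 6 * (3 * y**2 * x + x**3)
dfdx = np.pi / 2 * (x**2 + y**2)
dfdy = np.pi * x * y
df = np.sqrt(dfdx**2 * dx**2 + dfdy**2 * dy**2)
```

Plotting `f` with a band `f - df` to `f + df` shows the band narrowing as $t$ increases, because the shrinking partial derivatives outweigh the growing $\delta x$.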
