I'll illustrate my confusion with an example:
It can be shown, assuming $E_xE_y=0$, that the error in an arithmetical multiplication will be:
$E_{xy}=xE_y+yE_x+\mu$
where $\mu$ is the so-called 'round-off error'.
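(For context, this formula follows from writing the stored values as $x + E_x$ and $y + E_y$ and expanding the product:

$$(x + E_x)(y + E_y) = xy + xE_y + yE_x + E_xE_y,$$

so with the assumption $E_xE_y = 0$ the error in the exact product of the stored values is $xE_y + yE_x$, and the machine's rounding of that product contributes the additional term $\mu$.)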
Now, if I'm told to calculate the absolute error in the multiplication, I would find it by calculating:
$\mid xE_y+yE_x+\mu\mid= \mid E_{xy} \mid$
But my teacher just takes absolute values of every term, like this:
$\mid xE_y\mid+\mid yE_x\mid+\mid\mu\mid= \mid E_{xy} \mid$
And for some reason this works fine; namely, it gives the correct result for
$\mid(xy)_{\text{real}}-(xy)_{\text{machine}}\mid$,
which is what we're interested in calculating.
Any ideas why this works? I don't think you can distribute absolute value signs across a sum that way.
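To make the question concrete, here is a minimal numerical sketch. The values of $x$, $y$, $E_x$, and $E_y$ are made up for illustration, and machine round-off $\mu$ is ignored so the arithmetic stays exact enough to inspect. It compares the signed error $xE_y + yE_x$ with the term-by-term absolute sum $\mid xE_y\mid + \mid yE_x\mid$:

```python
# Hypothetical values: x, y are the "true" quantities; the stored
# approximations carry errors E_x and E_y (mu is ignored here).
x, y = 1.7, 2.3
E_x, E_y = 1e-4, -2e-4

x_stored = x + E_x
y_stored = y + E_y

# Actual error in the product of the stored values.
E_xy = x_stored * y_stored - x * y

linear = x * E_y + y * E_x              # first-order estimate (drops E_x*E_y)
bound = abs(x * E_y) + abs(y * E_x)     # term-by-term absolute sum

print("actual error:", E_xy)
print("signed first-order estimate:", linear)
print("absolute-sum value:", bound)

# The absolute sum is always at least |E_xy| (triangle inequality),
# but when the signed terms partially cancel it can be much larger.
assert abs(E_xy) <= bound + abs(E_x * E_y)
```

Note that with these (assumed) numbers the two terms have opposite signs, so $\mid xE_y + yE_x\mid$ comes out much smaller than $\mid xE_y\mid + \mid yE_x\mid$, which is exactly the discrepancy the question is about.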