Precision loss from real arithmetic operations


I feel like this is a dumb question, but I'm looking for a reference on the theoretical precision loss of exact real-number operations such as addition, multiplication, division, and scalar multiplication. Google mostly returns material on floating-point representations, which is not quite what I want.

As an example: define $a'$ to be a $q$-approximation of $a$ if $|a'-a| < 2^{-q}$. Suppose that $a'$ and $b'$ are $q$-approximations of $a, b \in \mathbb{R}$ respectively. Then $a' + b'$ is a $(q-1)$-approximation of $a+b$, since by the triangle inequality $$|(a+b) - (a'+b')| \leq |a-a'| + |b-b'| < 2\cdot 2^{-q} = 2^{-(q-1)}.$$
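(For multiplication, my own back-of-the-envelope sketch suggests the loss also depends on the magnitudes of the operands: writing $ab - a'b' = (a-a')b + a'(b-b')$ and using $|a'| \leq |a| + 2^{-q}$ gives $$|ab - a'b'| \leq |b|\,|a-a'| + |a'|\,|b-b'| < \left(|a| + |b| + 2^{-q}\right)2^{-q},$$ so when $|a|, |b| \leq 2^k$ the product is, roughly, only a $(q-k-2)$-approximation. I'd like a reference that treats this systematically rather than my ad hoc bounds.)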

I realize these facts would be easy enough to prove myself, but a nice reference would give me some peace of mind and serve as a good learning resource as I consider more complicated algorithms. Thanks in advance!
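Not a substitute for a reference, but for anyone curious: the addition bound above is easy to sanity-check with exact rational arithmetic. The following is just an illustrative sketch (the helper name `is_q_approx` and the particular perturbation are my own choices, not from any source):

```python
from fractions import Fraction

def is_q_approx(approx, exact, q):
    """Return True if |approx - exact| < 2**(-q), computed exactly over Q."""
    return abs(approx - exact) < Fraction(1, 2**q)

q = 10
a, b = Fraction(1, 3), Fraction(2, 7)

# Perturb both inputs in the same direction by just under 2**(-q),
# so the errors add up rather than cancel.
eps = Fraction(1, 2**q) - Fraction(1, 2**(q + 5))
a_p, b_p = a + eps, b + eps

assert is_q_approx(a_p, a, q)
assert is_q_approx(b_p, b, q)

# The sum's error is 2*eps: still below 2**-(q-1), but above 2**-q,
# so a' + b' is a (q-1)-approximation and genuinely not a q-approximation.
assert is_q_approx(a_p + b_p, a + b, q - 1)
assert not is_q_approx(a_p + b_p, a + b, q)
```

This also shows the factor-of-two loss is tight: no bound better than $2^{-(q-1)}$ can hold for the sum in general.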