Numerical error analysis


I am facing the following task and do not know how to solve it.

The input parameters $$a=10^6, \qquad b=10^6 + 10^{-2}$$ are rounded internally to $$a^*, b^*$$ with $$a=a^*(1 + \epsilon_1),\qquad b=b^*(1 + \epsilon_2),$$ where $$|\epsilon_1|,|\epsilon_2|\ll 1.$$ Using a first-order error analysis (quadratic terms in $\epsilon$ are neglected), determine an estimate of $\epsilon_1,\epsilon_2$ such that $$z=\frac{1}{a-b}$$ and $$z^*=\frac{1}{a^*-b^*}$$ satisfy $$z=z^* (1+\epsilon_*)$$ with $$|\epsilon_*|\le 10^{-7}.$$

Well, what exactly does it mean that a number is rounded such that $$a=a^*(1 + \epsilon)?$$ I guess $\epsilon$ is related to the smallest possible step between numbers that the computer can represent.
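For a feel for the numbers involved, here is a small Python sketch (my own illustration, not part of the exercise) that plugs the given $a$ and $b$ into the standard first-order propagation bound $|\epsilon_*| \le \frac{|a|+|b|}{|a-b|}\max(|\epsilon_1|,|\epsilon_2|)$:

```python
# Sketch: amplification factor for z = 1/(a - b) with the exercise's inputs.
# Assumes the standard first-order result
#   |eps_z| <= (|a| + |b|) / |a - b| * eps,
# where eps bounds the relative rounding error of both inputs.
a = 1e6
b = 1e6 + 1e-2

amplification = (abs(a) + abs(b)) / abs(a - b)
eps_required = 1e-7 / amplification  # largest input error keeping |eps_z| <= 1e-7

print(f"amplification ~ {amplification:.3e}")  # about 2e8
print(f"required eps  ~ {eps_required:.3e}")   # about 5e-16
```

So the inputs would have to be stored to roughly $5\cdot 10^{-16}$ relative accuracy, i.e. near the limit of IEEE double precision, for the result to be good to $10^{-7}$.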


We know $$z=\frac{1}{a-b},\qquad z^* = \frac{1}{a\,(1+\epsilon_a) - b\,(1+\epsilon_b)}.$$ Considering $z/z^*$ we get $$\frac{z}{z^*} = 1+\epsilon_z = \frac{a\,(1+\epsilon_a) - b\,(1+\epsilon_b)}{a-b},$$ which gives us $$\epsilon_z = \frac{a\,\epsilon_a - b\,\epsilon_b}{a-b}.$$

If we let $a = c + d/2$ and $b=c-d/2$, we can see what the error looks like as a function of the difference $d$: $$\epsilon_z = \frac{2c(\epsilon_a - \epsilon_b) + d(\epsilon_a+\epsilon_b)}{2d}.$$ You can then plug this into your inequality for $\epsilon_z$.

What can we see about the error given this form? As the difference between the two numbers approaches zero we get $$\lim_{|d|\rightarrow 0} \epsilon_z = \begin{cases}\pm\infty & \mathrm{if }\,\,\epsilon_a \neq \epsilon_b\\\epsilon_a &\mathrm{if }\,\,\epsilon_a = \epsilon_b\end{cases}$$ I.e., for a small difference the same relative error in $a$ and $b$ leads to roughly the same absolute error, which cancels out due to the differing signs, leaving only the common relative error; any mismatch between $\epsilon_a$ and $\epsilon_b$, however, is amplified without bound. As the difference gets bigger we have $$\lim_{|d|\rightarrow \infty} \epsilon_z = \frac{1}{2}(\epsilon_a + \epsilon_b).$$ Here the errors do not cancel: the same magnitude of relative error gives different absolute errors because of the different sizes of $a$ and $b$, and if $a^*$ is too large while $b^*$ is too small, the minus sign in front of $b$ compounds the error rather than cancelling it. Note also that $z^*$ involves taking a reciprocal, so a denominator that is too large leads to a $z^*$ that is too small and vice versa; with the convention used here, $\epsilon_z$ is exactly the relative error of the computed denominator.
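As a quick numerical illustration of the two limits (my own sketch; it uses exact rational arithmetic so that floating-point rounding does not interfere with the effect being shown):

```python
# Sketch of the limiting behaviour discussed above, using
# a = c + d/2, b = c - d/2 and fixed relative errors on a and b.
from fractions import Fraction as F  # exact arithmetic, no float rounding

def eps_z(c, d, eps_a, eps_b):
    a, b = c + d / 2, c - d / 2
    return (a * eps_a - b * eps_b) / (a - b)

c = F(10)**6
eps_a, eps_b = F(3, 10**10), F(1, 10**10)  # made-up unequal errors ~1e-10

for d in (F(1, 100), F(1), F(10)**8):
    print(float(d), float(eps_z(c, d, eps_a, eps_b)))
# small d: eps_z is huge (cancellation amplifies the error mismatch);
# large d: eps_z approaches (eps_a + eps_b)/2 = 2e-10
```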

Of course, this whole discussion is predicated on the assumption that we can calculate $z^*$ exactly from the rounded $a^*$ and $b^*$. In reality there will also be rounding error inside the computer when computing and storing $z^*$.

Edit: I just noticed I defined the errors the other way around for $a$ and $b$, i.e. I used $a^* = a(1+\epsilon_a)$ rather than $a=a^*(1+\epsilon)$. This does not matter much: since you specified that quadratic error terms are to be ignored, we can write $$a^* = \frac{a}{1+\epsilon} = a(1 - \epsilon + O(\epsilon^2)),$$ and the sign change is easily absorbed into the definition of the error.
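A one-line numerical check (a sketch) that the two conventions agree to first order:

```python
# The two error conventions differ by the factor 1/(1 + eps) vs (1 - eps);
# the discrepancy between them is O(eps^2), negligible at first order.
eps = 1e-6
exact_factor = 1 / (1 + eps)  # relates a* to a under the other convention
first_order = 1 - eps
print(abs(exact_factor - first_order))  # ~eps**2 = 1e-12
```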