Floating Point and machine error


Let $a$ and $b$ be two arbitrary real numbers. Show that the relative error made by computing $a^2b$ in floating point arithmetic is bounded by $5\epsilon + O(\epsilon^2)$, where $\epsilon$ is the machine error.

As $a$ and $b$ are real numbers, their floating point representations can be written as:

$fl(a) = a(1 + \delta_1)$

$fl(b) = b(1 + \delta_2)$

with $|\delta_1| \leq \epsilon$, $|\delta_2| \leq \epsilon$. To get the bound I want to use the relative error:

$\mid \frac{x-fl(x)}{x}\mid$

so $fl(a^2) = a^2(1+\delta_1)^2$, and $fl(a^2b) = a^2b (1+\delta_1)^2(1+\delta_2)$. But this only gives three correction factors, not the $5\epsilon$ that I am after. The problem seems to lie with the floating point operations themselves, but I can't grasp it. Any ideas?


You also need to account for the rounding errors incurred by the two floating point multiplications themselves, not just the representation errors of $a$ and $b$. Each multiplication contributes one additional correction factor $(1+\delta)$, giving two more factors on top of the three you already have, for a total of five; that is where the factor $5$ in the estimate comes from.
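Written out under the standard model $fl(x \cdot y) = xy(1+\delta)$ with $|\delta| \leq \epsilon$ (here $\delta_3$ and $\delta_4$ are names introduced for the rounding errors of the two multiplications):

$fl\big(fl(a)\cdot fl(a)\big) = a^2(1+\delta_1)^2(1+\delta_3)$

$fl\big(fl(a^2)\cdot fl(b)\big) = a^2 b\,(1+\delta_1)^2(1+\delta_2)(1+\delta_3)(1+\delta_4)$

Expanding the product of the five factors,

$(1+\delta_1)^2(1+\delta_2)(1+\delta_3)(1+\delta_4) = 1 + 2\delta_1 + \delta_2 + \delta_3 + \delta_4 + O(\epsilon^2),$

so the relative error satisfies

$\left|\frac{a^2b - fl(a^2b)}{a^2b}\right| \leq 5\epsilon + O(\epsilon^2).$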
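A small numerical sketch of the bound (not part of the original answer): in IEEE 754 double precision the unit roundoff is $u = 2^{-53}$. Since the inputs below are already machine numbers, the representation errors $\delta_1, \delta_2$ vanish and only the two multiplication roundings remain, so the observed relative error of `(a*a)*b` should sit comfortably inside the $5u + O(u^2)$ bound.

```python
import random
from fractions import Fraction

u = 2.0 ** -53  # unit roundoff for IEEE 754 double precision

random.seed(0)
for _ in range(10_000):
    a = random.uniform(-1e3, 1e3)
    b = random.uniform(-1e3, 1e3)
    computed = (a * a) * b                            # two rounded multiplications
    exact = Fraction(a) * Fraction(a) * Fraction(b)   # exact rational arithmetic
    rel_err = abs((Fraction(computed) - exact) / exact)
    assert rel_err <= 5 * u  # the 5u bound holds (with margin, since delta_1 = delta_2 = 0 here)
```

Exact rational arithmetic via `fractions.Fraction` avoids having to trust a second floating point computation when measuring the error.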