Stability of multiplication


Let $\alpha_1, \dots, \alpha_n$ be floating point numbers (say, single precision), and suppose I want to compute $\alpha = \alpha_1\alpha_2\alpha_3\cdots\alpha_{n-1}\alpha_n$. I do the multiplications from left to right, and let $\alpha_c$ denote the computed result. Then I have

$\alpha_c = \alpha_1\alpha_2\cdots\alpha_n(1+\delta_1)(1+\delta_2)\cdots(1+\delta_{n-1})$, where $|\delta_i| \le u$ and $u = \frac{1}{2}\epsilon$ is the unit roundoff. The notes then say that, writing $(1+\delta_1)(1+\delta_2)\cdots(1+\delta_{n-1}) = 1+\delta$, we can show $|\delta| \le 1.01nu$ provided $nu \le 0.01$. Hence the algorithm is numerically stable. But I never really understood what that means, in the sense that the error bound obviously grows as $n$ grows. Why is it stable?
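To make the bound concrete, here is a small sketch (my own illustration, not from the notes) that multiplies $n$ single-precision factors left to right, compares the result against the same product carried out in double precision, and checks the relative error against $1.01nu$ with $u = 2^{-24}$ for single precision. The factors are drawn near $1$ only to avoid overflow in `float32`; the choice is otherwise arbitrary.

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)

# Factors near 1 so the float32 product neither overflows nor underflows.
factors32 = rng.uniform(0.9, 1.1, size=n).astype(np.float32)

# Left-to-right product, rounding to single precision after every step,
# which matches the model alpha_c = prod(alpha_i) * prod(1 + delta_i).
prod32 = np.float32(1.0)
for a in factors32:
    prod32 = np.float32(prod32 * a)

# Reference: exact-enough product of the same (already rounded) inputs.
exact = np.prod(factors32.astype(np.float64))

rel_err = abs(float(prod32) - exact) / abs(exact)

u = 2.0 ** -24          # unit roundoff for IEEE single precision
bound = 1.01 * n * u    # the bound from the notes (valid since n*u <= 0.01)

print(f"relative error = {rel_err:.3e}, bound = {bound:.3e}")
```

In practice the observed relative error sits far below the bound: the $\delta_i$ have varying signs and partially cancel, so the typical error grows more like $\sqrt{n}\,u$ than like $nu$. The bound is a worst case.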