How is $f(x)=x+1$ not backwards stable if I consider the error propagated in the addition?


Many sources claim that $f(x)=x+1$ is not backwards stable. That is, it does not give an exact solution to a slightly perturbed (or "nearby") problem.

e.g. https://www.cs.usask.ca/~spiteri/CMPT898/notes/numericalStability.pdf on page 24.

Now, when I work this out myself, I think I can show that there exists an $\epsilon$ such that the computed solution, with rounding errors $\epsilon_1, \epsilon_2,$ and $\epsilon_3$, equals the exact solution of a slightly perturbed problem. Since there is a rounding error on each input, and then an error for the addition, the computed solution is $(x(1+\epsilon_1)+1(1+\epsilon_2))(1+\epsilon_3)$, and the exact solution of the slightly perturbed problem is $(x+1)(1+\epsilon)$. So I am trying to show that these are equal for some $\epsilon$ on the order of machine precision:

$(x(1+\epsilon_1)+1(1+\epsilon_2))(1+\epsilon_3)=(x+1)(1+\epsilon)$

Multiplying these out, we get:

$x+\epsilon_1x+1+\epsilon_2+\epsilon_3x+\epsilon_1\epsilon_3x+\epsilon_3+\epsilon_2\epsilon_3=x+1+\epsilon x+\epsilon$

Subtracting $x$ and $1$ from both sides:

$\epsilon_1x+\epsilon_2+\epsilon_3x+\epsilon_1\epsilon_3x+\epsilon_3+\epsilon_2\epsilon_3=\epsilon x+\epsilon$

Now, it seems clear to me that we can always find an $\epsilon$ on the right-hand side that satisfies this equation.
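This can be checked symbolically; here is a small sketch using SymPy (assuming it is available) that solves the equation above for $\epsilon$:

```python
import sympy as sp

x, e1, e2, e3, eps = sp.symbols('x epsilon_1 epsilon_2 epsilon_3 epsilon')

lhs = (x*(1 + e1) + 1*(1 + e2))*(1 + e3)   # computed solution
rhs = (x + 1)*(1 + eps)                     # exact solution of a perturbed problem

# The equation is linear in epsilon, so there is a unique solution:
sol = sp.solve(sp.Eq(lhs, rhs), eps)[0]
# sol equals (lhs - (x + 1))/(x + 1), well defined whenever x != -1,
# and of order max(|e1|, |e2|, |e3|) for fixed x.
assert sp.simplify(lhs - rhs.subs(eps, sol)) == 0
```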

Most sources cite $x=0$ as the value that breaks this condition, but if I set $x=0$ I am still able to find an $\epsilon$ that makes this work.

What am I missing here?

There are two answers below.


In this case $\tilde{f}(x) = \mathrm{fl}(x) \oplus 1$. Now there is a bound $c>0$ such that if $0<\mathrm{fl}(x)< c$ then $\mathrm{fl}(x)\oplus 1=1$. So in this case $\tilde{f}(x) = f(\tilde{x})$ implies $\tilde{x}=0$. However, $0$ is too far from $\mathrm{fl}(x)$ to satisfy the backward stability requirement.
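A quick numerical check of this answer's point, sketched in Python (IEEE double precision assumed):

```python
import sys

eps = sys.float_info.epsilon     # machine epsilon for IEEE double, ~2.22e-16
x = eps / 4                      # tiny but strictly positive: 0 < fl(x) < c
assert x > 0
assert x + 1.0 == 1.0            # fl(x) (+) 1 rounds to exactly 1.0

# The only real x_tilde with x_tilde + 1 == 1.0 is x_tilde = 0, so the
# relative backward perturbation |x_tilde - x| / |x| equals 1, i.e. 100%,
# far larger than machine precision.
```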


The fundamental problem is that the domain is not clearly stated. We have two functions which are relevant in this context. Let $\mathcal F$ denote our set of floating point numbers. Then the relevant functions are $$ f : \mathcal F \times \mathcal F \rightarrow \mathbb R, \quad f(x,y) = x +y$$ and $$ g : \mathcal F \to \mathbb R, \quad g(x) = 1 + x.$$ These two functions are closely related, yet decidedly different, as evidenced by their different domains.

Let $\hat{f}$ and $\hat{g}$ denote the computed values of $f$ and $g$. In the absence of floating point exceptions, we have $$ \hat{f} = (x+y)(1+\delta), \quad |\delta| \leq u,$$ where $u$ is the unit roundoff. This can also be expressed as $$\hat{f} = f(\bar{x},\bar{y}), \quad \bar{x} = x(1+\delta), \quad \bar{y} = y(1+\delta).$$ We conclude that $f$ is backward stable. In contrast, we have $$ \hat{g} = (1 + x)(1+ \nu), \quad |\nu| \leq u.$$ It is clear that $$ \hat{g} = 1 + \nu + x(1 + \nu) = 1 + z, \quad z = \nu + x(1+\nu) = x\left( 1 + \left[\frac{\nu}{x} + \nu \right] \right) .$$ Now, while $z$ is a good approximation of $x$ in the absolute sense, the relative error is large when $x$ is tiny. This is the point where we discover that backward stability has been lost.
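To illustrate the last point numerically, here is a small Python sketch (IEEE double precision assumed) computing the $z$ defined above for a tiny input:

```python
x = 1.0e-18                 # tiny input
g_hat = 1.0 + x             # computed value of g; rounds to exactly 1.0
z = g_hat - 1.0             # the z with g_hat = 1 + z (here z = 0.0 exactly)

abs_err = abs(z - x)        # ~1e-18: z approximates x well in the absolute sense
rel_err = abs(z - x) / abs(x)
assert rel_err == 1.0       # 100% relative error: backward stability fails
```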