The terms
$a(x)=\dfrac{1-x}{1+2x}-\dfrac{1-2x}{1+x}$ and $b(x)=\dfrac{3x^2}{(1+2x)(1+x)}$ describe, for $x>0$, the same function $f(x)$.
Calculate the condition of $f$ for $0<|x|\ll 1$.
How would one proceed when evaluating $f(x)$ for $0<|x|\ll 1$ to guarantee good numerical stability?
I want to calculate the condition of $f$. I would do so by using the formula
$\displaystyle{\left|\frac{f'(x)x}{f(x)}\right|}=\left|\frac{3x+2}{1+3x+2x^2}\right|$
For the condition of $f$, we then get $\operatorname{cond}(f)(0)=2$ and $\operatorname{cond}(f)(1)=\frac56$, so a worse condition around $0$ and a good condition around $1$.
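As a quick sanity check (not part of the original derivation), the closed-form condition number can be compared against a finite-difference approximation of $\left|\frac{f'(x)x}{f(x)}\right|$:

```python
def f(x):
    """f evaluated via the b(x) form, 3x^2 / ((1+2x)(1+x))."""
    return 3 * x * x / ((1 + 2 * x) * (1 + x))

def cond_f(x):
    """Closed-form condition number |x f'(x)/f(x)| = |(3x+2)/(2x^2+3x+1)|."""
    return abs((3 * x + 2) / (2 * x * x + 3 * x + 1))

# central finite difference for f'(x); h is an arbitrary small step
x, h = 1.0, 1e-6
fprime = (f(x + h) - f(x - h)) / (2 * h)

print(cond_f(0.0), cond_f(1.0))   # 2 and 5/6, as computed above
print(abs(x * fprime / f(x)))     # should agree with cond_f(1.0)
```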
Question: Is this correct? I do not see how the different expressions of $f$ can be used here, since applying the formula gives the same result for both. So I think the choice of expression does not matter here.
From the lecture notes I do not see why it would be justified to use the formula for the condition on each summand $\frac{1-x}{1+2x}$ and $-\frac{1-2x}{1+x}$ separately and then "add" the results.
In other words: how can a different expression help?
If I do this anyway, I get that the term $\frac{1-x}{1+2x}$ has a good condition around $0$ and a terrible condition around $1$, as its condition tends to $\infty$ for $x\to 1$.
For the other term $-\frac{1-2x}{1+x}$ I also get a good condition around $0$, and a condition of $\frac32$ around $1$.
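For what it's worth, these per-summand condition numbers can be checked numerically; the formulas below come from applying $\left|\frac{g'(x)x}{g(x)}\right|$ to each quotient (a sketch, the helper names are mine):

```python
def cond_z1(x):
    # condition of (1-x)/(1+2x): |x z1'/z1| = 3|x| / |(1-x)(1+2x)|
    return abs(3 * x / ((1 - x) * (1 + 2 * x)))

def cond_z2(x):
    # condition of (1-2x)/(1+x): |x z2'/z2| = 3|x| / |(1-2x)(1+x)|
    return abs(3 * x / ((1 - 2 * x) * (1 + x)))

print(cond_z1(0.001), cond_z2(0.001))  # both small near 0
print(cond_z2(1.0))                    # 3/2, as stated above
print(cond_z1(0.999))                  # blows up as x -> 1
```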
So I do not see how this would help here.
Can you comment on the topic of condition and stability, as I find this kind of unclear in my lecture notes? Thanks in advance.
The condition number by itself does not establish numerical stability. When you have different algorithms for computing $f(x)$, as here, where $a(x)$ and $b(x)$ suggest two equivalent ways of evaluating $f(x)$, they cannot be distinguished by the condition number (which is the same for both), but rather by the way they propagate the roundoff errors introduced at each step of the algorithm. In each case you can write the relative error of the final result as $$ \varepsilon_f = \textrm{cond}_f(x) \varepsilon_x + \sum_{i=1}^n Q_i(x) \varepsilon_i $$
and the algorithm is said to be numerically stable near some $x$ if both the condition number and the coefficients $Q_i$ are bounded there. So, in the case of $a(x)$, you would have, for instance, $$ z_1 = \frac{1-x}{1+2x}, \quad z_2 = \frac{1-2x}{1+x}, \quad z_3 = z_1-z_2 $$ and so \begin{align*} \varepsilon_{z_1} = & -\frac{3x}{(1-x)(1+2x)}\varepsilon_x + \varepsilon_1\\ \varepsilon_{z_2} = & -\frac{3x}{(1-2x)(1+x)}\varepsilon_x + \varepsilon_2\\ \varepsilon_{z_3} = & \frac{z_1}{z_1-z_2}\varepsilon_{z_1} - \frac{z_2}{z_1-z_2}\varepsilon_{z_2} + \varepsilon_{3} \end{align*}
If you substitute $\varepsilon_{z_1}$ and $\varepsilon_{z_2}$ into the last expression, you can observe the behaviour of the coefficients for different ranges of $x$.
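Near $x=0$ the subtraction $z_3 = z_1 - z_2$ is the dangerous step: $z_1 - z_2 = f(x) \approx 3x^2$ while $z_1, z_2 \approx 1$, so the factors $z_1/(z_1-z_2)$ and $z_2/(z_1-z_2)$ that multiply $\varepsilon_{z_1}$ and $\varepsilon_{z_2}$ grow like $1/(3x^2)$. A quick numerical look (my own sketch) at their magnitudes:

```python
def amplification(x):
    """Magnitudes of the factors multiplying the errors of z1 and z2
    in the subtraction step z3 = z1 - z2."""
    z1 = (1 - x) / (1 + 2 * x)
    z2 = (1 - 2 * x) / (1 + x)
    return abs(z1 / (z1 - z2)), abs(z2 / (z1 - z2))

for x in (1.0, 0.1, 1e-3, 1e-5):
    print(x, amplification(x))  # bounded near 1, roughly 1/(3x^2) near 0
```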
If you repeat the procedure for $b(x)$, you'll be able to compare.
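To make the comparison concrete, here is a small experiment (my own sketch, using exact rational arithmetic as the reference) evaluating both forms near $0$ in double precision:

```python
from fractions import Fraction

def a(x):
    # difference form: two nearly equal quotients are subtracted near x = 0
    return (1 - x) / (1 + 2 * x) - (1 - 2 * x) / (1 + x)

def b(x):
    # equivalent product form: no cancellation for small x
    return 3 * x * x / ((1 + 2 * x) * (1 + x))

def f_exact(x):
    # exact rational reference (x itself is an exact binary64 value)
    q = Fraction(x)
    return 3 * q * q / ((1 + 2 * q) * (1 + q))

x = 2.0 ** -27                # a small positive x, exact in binary64
ref = float(f_exact(x))
print(abs(a(x) - ref) / ref)  # large relative error: cancellation
print(abs(b(x) - ref) / ref)  # near machine epsilon: stable
```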