For $f(x)=(1-\cos(x))/x^2$,
(a) Analytically evaluate $\lim_{x→0} f(x) = L$.
(b) As $x→0$, at what rate does $f(x)→L$?
(c) Suppose that we are able to represent floating point numbers with $N$ decimal digits of accuracy. Around what value of $r$ will the evaluation of $f(x)$ produce very large relative error when $|x| < r$?
(d) Create your own algorithm or expression for computing $f(x)$ for very small values of $x$. Verify the success of your algorithm computationally.
I know $f(x)$ goes to $1/2$ as $x$ approaches zero, but I really have no idea how to deal with parts (b), (c) and (d).
(b) We have \begin{equation} \cos(x) = 1 - \frac{x^2}{2} + \frac{x^4}{24} + O(x^6), \end{equation} from which it follows that \begin{equation} \frac{1-\cos(x)}{x^2} = \frac{1}{2} - \frac{x^2}{24} + O(x^4). \end{equation} Therefore \begin{equation} f(x) - \frac{1}{2} = O(x^2). \end{equation} (c) When $x \rightarrow 0$, $x \not = 0$, the expression $1 - \cos(x)$ will cancel catastrophically. In general, a subtraction $d = a - b$ between two real numbers $a$ and $b$ will cancel catastrophically when $a \approx b$. Specifically, if $\hat{a} = a(1+\theta_a)$ and $\hat{b} = b(1+\theta_b)$ are the computed approximations of the numbers $a$ and $b$, then the difference $\hat{d} = \hat{a} - \hat{b}$ satisfies \begin{equation} \frac{d - \hat{d}}{d} = \frac{b \theta_b - a \theta_a}{a - b}, \end{equation} from which it follows that \begin{equation} \left | \frac{d - \hat{d}}{d} \right | \leq \frac{|a| + |b|}{|a-b|} \max \{|\theta_a|, |\theta_b| \}. \end{equation} We see that if $a \approx b$, then the right hand side is not necessarily small. This does not preclude that the computed difference $\hat{d}$ will occasionally approximate $d$ with a small relative error, but in general this is not the case and the subtraction will produce garbage. On the other hand, if, say, $|a| > 2 |b|$, then the subtraction is completely safe, as \begin{equation} \frac{|a| + |b|}{|a-b|} \leq \frac{|a| + \frac{1}{2}|a|}{\frac{1}{2}|a|} \leq 3, \end{equation} which implies that the relative error in the subtraction is only a modest multiple (at most $3$) of the largest relative error associated with $\hat{a}$ and $\hat{b}$. Returning to the specific problem at hand, the subtraction $1 - \cos(x)$ is completely safe as long as $1 > 2 |\cos(x)|$, i.e. (restricting attention to, say, $|x| \leq \frac{\pi}{2}$) as long as
\begin{equation} |x| \geq \frac{\pi}{3}. \end{equation} Inside the interval $I = \left(-\frac{\pi}{3},\frac{\pi}{3} \right)$ the original expression becomes increasingly useless as $x$ tends to zero. You can therefore defend the choice $r = \frac{\pi}{3}$ as the turning point, regardless of the architecture used.
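To see the cancellation in practice, here is a minimal sketch (assuming IEEE double precision) that compares the naive formula with the mathematically equivalent half-angle form $1 - \cos(x) = 2\sin^2(x/2)$, which avoids the subtraction entirely:

```python
import math

def f_naive(x):
    # Direct evaluation: suffers catastrophic cancellation for small x.
    return (1.0 - math.cos(x)) / (x * x)

def f_stable(x):
    # Equivalent form via the half-angle identity 1 - cos(x) = 2 sin^2(x/2);
    # no subtraction of nearly equal quantities occurs.
    s = math.sin(0.5 * x)
    return 2.0 * s * s / (x * x)

for k in range(2, 10):
    x = 10.0 ** (-k)
    print(f"x = 1e-{k}:  naive = {f_naive(x):.16f}  stable = {f_stable(x):.16f}")
```

The naive column drifts away from $1/2$ and eventually collapses to $0$, while the stable column stays at $1/2$ to full precision.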
Now, suppose that we are able to represent floating point numbers with $N$ decimal digits of accuracy. Around what value of $r$ will the evaluation of $f$ produce very large relative errors?
Since \begin{equation} \cos(x) = 1 - \frac{1}{2}x^2 + O(x^4), \end{equation} we can say with certainty that the floating point representation of $\cos(x)$ satisfies \begin{equation} \text{fl}(\cos(x)) = 1 \end{equation} when \begin{equation} \frac{1}{2} x^2 \leq \frac{1}{2} u, \end{equation} where $u$ is the unit roundoff of the machine. It follows that any machine should return $\cos(x) = 1$ when $|x| \leq \sqrt{u}$, resulting in a relative error of $1$ for the naive evaluation of $f$. It is possible to be more precise, but then the original question should have included information about the quality of the machine implementation of $\cos$, and a threshold tolerance should have been specified. My best guess is that the original author of the problem operated with $u = \frac{1}{2} 10^{-N}$.
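A quick sanity check of this threshold (a sketch, assuming IEEE binary64 arithmetic with $u = 2^{-53}$):

```python
import math

u = 2.0 ** -53            # unit roundoff for IEEE binary64 (assumption)
r = math.sqrt(u)          # predicted breakdown radius, about 1.05e-8

# Below the threshold, cos(x) rounds to exactly 1, so the naive
# formula returns 0 and the relative error against the limit 1/2 is 1.
x = 0.5 * r
naive = (1.0 - math.cos(x)) / (x * x)
print(math.cos(x) == 1.0)   # cos(x) is indistinguishable from 1
print(naive)                # the naive evaluation collapses to 0
```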
(d) In order to evaluate $f(x)$ for small values of $x$ we require an equivalent expression which is reliable. Suppose we desire a relative error which is less than a user specified tolerance $\tau$ inside the problematic interval $I$. Then we proceed as follows. Let $p_n$ denote the Taylor polynomial of order $n$ for $g(x) = \cos(x)$ at the point $x_0 = 0$. Then \begin{equation} p_{2n}(x) = \sum_{j=0}^n (-1)^{j} \frac{x^{2j}}{(2j)!} \end{equation} and also $p_{2n} = p_{2n+1}$. By Taylor's theorem \begin{equation} g(x) - p_{2n}(x) = \frac{g^{(2n+2)}(\xi)}{(2n+2)!} x^{2n+2} \end{equation} for at least one $\xi$ between $x$ and $0$. Since $|g^{(2n+2)}| \leq 1$, it follows that \begin{equation} \left| f(x) - \frac{1 - p_{2n}(x)}{x^2} \right| \leq \frac{1}{(2n+2)!} |x|^{2n} \leq \frac{1}{(2n+2)!} \left( \frac{\pi}{3} \right)^{2n}. \end{equation} It is now straightforward to select $n$ so that we have a small absolute error. A small relative error can be secured by observing that the target value, i.e. $f(x)$, satisfies $|f(x)| \geq \frac{1}{4}$ for all $x \in I$. The relative error satisfies \begin{equation} \left| \frac{f(x) - \frac{1 - p_{2n}(x)}{x^2}}{f(x)} \right| \leq \frac{4}{(2n+2)!} \left( \frac{\pi}{3} \right)^{2n} \leq \tau, \end{equation} provided that $n$ is chosen sufficiently large. If $\tau = 2^{-53}$ is the double precision roundoff error, then $n=9$ will suffice, but extra care is required during the evaluation of the polynomial as we are operating at the boundaries of machine precision. If $\tau \approx 100u$, then it is certainly safe to apply Horner's method to evaluate the polynomial.
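The approach above can be sketched as follows (a sketch, not a definitive implementation). Dividing $1 - p_{2n}(x)$ by $x^2$ term by term gives the polynomial $\sum_{j=0}^{n-1} (-1)^j x^{2j}/(2j+2)!$, which is evaluated here by Horner's method in the variable $t = x^2$, with $n = 9$ as discussed:

```python
import math

def f_taylor(x, n=9):
    # Evaluate f(x) = (1 - cos(x))/x^2 via the truncated series
    #   sum_{j=0}^{n-1} (-1)^j x^(2j) / (2j+2)!,
    # obtained by dividing 1 - p_{2n}(x) by x^2 term by term.
    # Horner's method in t = x^2, highest-degree coefficient first.
    t = x * x
    acc = 0.0
    for j in range(n - 1, -1, -1):
        acc = acc * (-t) + 1.0 / math.factorial(2 * j + 2)
    return acc

def f_naive(x):
    return (1.0 - math.cos(x)) / (x * x)

# Verification: the series agrees with the naive formula where the
# latter is reliable, and stays at 1/2 where the naive formula fails.
for x in (1e-1, 1e-4, 1e-8, 1e-12):
    print(f"x = {x:g}:  taylor = {f_taylor(x):.16f}  naive = {f_naive(x):.16f}")
```

Note that there is no subtraction of nearly equal quantities anywhere in `f_taylor`, so the evaluation remains accurate all the way down to the smallest representable $x$.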