In many algorithms, increasing the numerical precision, say from single to double, brings the numerical solution closer to the "true" solution (e.g. the analytic solution, if one exists), as long as there are no truncation errors at the algorithm or equation level.
I seem to remember, from my student days, a classic example in which increasing the numerical precision does not bring the numerical solution closer to the true solution. Of course, one could always concoct such an example, but I am looking for one particular, well-known example often included in textbooks.
Edit: I realize my wording was not clear enough. Let me clarify with some notation. Suppose there is a root-finding problem,
$$ f(x) = 0 $$
where $f(x)$ is some polynomial and $x$ is a real number that we are searching for. We devise an algorithm to solve this problem (e.g. Newton-Raphson). Now suppose that the algorithm runs in single precision, i.e. all variables within the algorithm are single precision. We find our numerical solution $x_1$.
Now, we run the same algorithm in double precision, i.e. all variables are now in double, and designate the new solution $x_2$. We repeat with quadruple precision and call the new solution $x_4$. We repeat with arbitrarily high precision.
Now, we have a sequence of solutions, $x_1, x_2, x_4,...$. Suppose the polynomial is manually factorable, so we know an analytic "true" solution $\hat {x}$.
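To make this setup concrete (this is just the well-behaved case, not the pathological example I am asking about), here is a minimal sketch that runs Newton-Raphson on $f(x) = x^2 - 2$ in single and double precision; the polynomial, starting guess, and iteration count are my own illustrative choices:

```python
import numpy as np

def newton(dtype, n_iter=50):
    """Newton-Raphson for f(x) = x**2 - 2, with all arithmetic in `dtype`."""
    x = dtype(1.0)  # initial guess
    for _ in range(n_iter):
        x = x - (x * x - dtype(2.0)) / (dtype(2.0) * x)
    return x

x1 = newton(np.float32)   # single-precision solution
x2 = newton(np.float64)   # double-precision solution
x_hat = np.sqrt(2.0)      # analytic "true" solution
print(abs(float(x1) - float(x_hat)), abs(float(x2) - float(x_hat)))
```

For this well-behaved problem the error shrinks with the working precision, which is the "usual" behavior described above.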
Usually, the following should hold:
$$ \lim_{n\to\infty}x_n = \hat{x}, $$
where $n$ indexes the precision (integers, powers of $2$, or whatever precision standard is used).
Now, there are pathological cases, often cited in textbooks, in which the above limit does not hold. I couldn't quite recall or find the example myself. Thanks.
A typical situation occurs in the numerical solution of differential equations and in numerical differentiation by finite differences.
In numerical integration, decreasing the step length and increasing the number of steps improves the accuracy of the solution only up to a certain point, beyond which the accumulated floating-point rounding error over the integration steps overwhelms the shrinking discretization error.
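As a minimal sketch of this effect, here is forward Euler applied to $y' = -y$, $y(0) = 1$, integrated to $t = 1$ entirely in single precision; the equation and the step sizes are my own illustrative choices:

```python
import numpy as np

def euler_final_value(h, dtype=np.float32):
    """Forward Euler for y' = -y, y(0) = 1, integrated to t = 1,
    with every operation carried out in the given floating-point type."""
    n = int(round(1.0 / h))
    y = dtype(1.0)
    step = dtype(h)
    for _ in range(n):
        y = y - step * y  # one Euler step in low precision
    return float(y)

exact = np.exp(-1.0)
for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"h = {h:.0e}   error = {abs(euler_final_value(h) - exact):.2e}")
```

Refining the step at first reduces the $O(h)$ discretization error, but each step also contributes a rounding error of relative size roughly $10^{-7}$ in single precision, and these accumulate with the growing number of steps, so beyond some point further refinement typically stops helping.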
In numerical differentiation a similar effect occurs for reasonably tame functions $f$ and relatively small $x$. Using the one-sided difference quotient $$f'(x)\approx\frac{f(x+h)-f(x)}{h},$$ as in the first panel below, one observes the best result, with error about $10^{-8}$, near $h=10^{-8}$, the square root of the double-precision machine accuracy ($\approx 10^{-16}$).
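This behavior is easy to reproduce; the following sketch tabulates the error of the one-sided quotient for $f = \sin$ at $x = 1$ (the test function and evaluation point are my own choices):

```python
import math

def forward_diff(f, x, h):
    # One-sided difference quotient: f'(x) ≈ (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # true derivative of sin at x
for k in range(1, 16):
    h = 10.0 ** (-k)
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = 1e-{k:02d}   error = {err:.2e}")
```

The error shrinks like $h$ down to roughly $h=10^{-8}$ and then grows again, because the cancellation in $f(x+h)-f(x)$ amplifies rounding errors of size $\approx 10^{-16}/h$.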
Using central differences $$f'(x)\approx\frac{f(x+h)-f(x-h)}{2h},$$ as in the second panel, the observed minimal error is smaller, around $10^{-10}$, but the optimal step size shifts toward the larger $h=10^{-5}$, roughly the cube root of the machine accuracy.
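The same experiment with the central difference quotient (again for $f = \sin$ at $x = 1$, my own choices) shows a smaller minimal error at a larger optimal $h$:

```python
import math

def central_diff(f, x, h):
    # Central difference quotient: f'(x) ≈ (f(x + h) - f(x - h)) / (2 h)
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.0
exact = math.cos(x)  # true derivative of sin at x
for k in range(1, 16):
    h = 10.0 ** (-k)
    err = abs(central_diff(math.sin, x, h) - exact)
    print(f"h = 1e-{k:02d}   error = {err:.2e}")
```

Here the discretization error decays like $h^2$ while the rounding error still grows like $10^{-16}/h$, so balancing the two gives an optimum near $h \approx 10^{-16/3} \approx 10^{-5}$.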
The same effect, better accuracy in the result but at a still larger optimal step size, occurs if one employs the Richardson extrapolation scheme on the central differences $$ f'(x)\approx \frac13\left(4\frac{f(x+h)-f(x-h)}{2h}-\frac{f(x+2h)-f(x-2h)}{4h}\right), $$ as in the third panel. This raises the achievable accuracy to about $10^{-12}$, but the best accuracy now occurs at the still larger step size $h=10^{-3}$.
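The Richardson-extrapolated variant can be checked the same way; this sketch combines the central quotients at $h$ and $2h$ exactly as in the formula above (test function and point again my own choices):

```python
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_diff(f, x, h):
    # Richardson extrapolation of the central difference:
    # (4 D(h) - D(2h)) / 3 cancels the O(h^2) error term, leaving O(h^4).
    return (4.0 * central_diff(f, x, h) - central_diff(f, x, 2.0 * h)) / 3.0

x = 1.0
exact = math.cos(x)  # true derivative of sin at x
for k in range(1, 14):
    h = 10.0 ** (-k)
    err = abs(richardson_diff(math.sin, x, h) - exact)
    print(f"h = 1e-{k:02d}   error = {err:.2e}")
```

With the discretization error now decaying like $h^4$ against the same $10^{-16}/h$ rounding growth, the optimum shifts to roughly $h \approx 10^{-16/5} \approx 10^{-3}$.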