This image on Wikipedia, originally taken from this Math.SE answer, shows $(x - 2)^9$ in expanded and factored form, plotted with a computer using floating-point arithmetic.
The image shows the dramatic loss of significance due to cancellation in the expanded expression, $x^{9} - 18 x^{8} + 144 x^{7} - 672 x^{6} + 2016 x^{5} - 4032 x^{4} + 5376 x^{3} - 4608 x^{2} + 2304 x - 512$.
The thing that interests me, however, is that in this plot, the exact error at any given point appears to be random/fractal in nature. There also appears to be a gap near the right side of the graph, which I am curious about. I made my own version of this plot using numpy and matplotlib, and saw no such gap:
Here is the same plot zoomed in a few times at a random location, until the individual points are visible, so you can see the apparently fractal structure:
Here is the code for the plot, if anyone is interested:
from sympy import symbols, lambdify, expand
import matplotlib.pyplot as plt
import numpy as np
x = symbols('x')
a = lambdify(x, (x - 2)**9, 'numpy')          # factored form
b = lambdify(x, expand((x - 2)**9), 'numpy')  # expanded form
xvals = np.linspace(2 - 0.0000005, 2.0000005, 16000)
plt.plot(xvals, a(xvals), linewidth=0.1)
plt.plot(xvals, b(xvals), linewidth=0.1)
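To see the cancellation numerically rather than graphically, here is a short sketch (my own addition, not part of the plot code above) comparing both forms at a single point near 2 against an exact computation done in rational arithmetic:

```python
from fractions import Fraction

def factored(x):
    return (x - 2)**9

def expanded(x):
    # the same coefficients as in the expanded polynomial above
    return (x**9 - 18*x**8 + 144*x**7 - 672*x**6 + 2016*x**5
            - 4032*x**4 + 5376*x**3 - 4608*x**2 + 2304*x - 512)

x = 2.0000003
# Fraction(x) is the exact rational value of the double x, so this
# computes (x - 2)^9 with no rounding at all, then rounds once at the end.
exact = float((Fraction(x) - 2)**9)

print(factored(x) - exact)   # tiny: comparable to the true value, ~2e-59
print(expanded(x) - exact)   # the cancellation error, many orders of magnitude larger
```

The factored form loses almost nothing (the subtraction $x - 2$ is exact for $x$ this close to 2), while the expanded form's error is dominated by the rounding of intermediate terms of magnitude up to a few tens of thousands.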
So my question is: is there a mathematical explanation for the apparently chaotic/fractal nature of the loss of significance due to cancellation?
EDIT
Here is the same plot for $(x - 3)^9$
and here is the same plot for $(x - 4)^9$
These seem to confirm H. H. Rugh's idea that the increase in the maximum error magnitude above 2 in the first plot comes from crossing a power of 2, where an extra bit is needed to represent the number: the $(x - 3)^9$ plot has roughly uniform magnitude (no power of 2 is crossed near 3), while the $(x - 4)^9$ plot shows a similar increase for values greater than 4.
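A quick way to check the extra-bit explanation is with `math.ulp` from the standard library (available since Python 3.9), which gives the gap between a double and the next representable one:

```python
import math

# The spacing between adjacent doubles doubles each time x crosses a power of 2.
print(math.ulp(1.999999))  # 2**-52
print(math.ulp(2.000001))  # 2**-51, twice as large
```

So any value just above 2 (and, likewise, each of its powers just above the corresponding power of 2) is represented on a grid twice as coarse as a value just below.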




More like thoughts than rigour: interesting plots! I believe that the plots correspond to weighted sums of 9 (or 10?) random variables (probably close to independent, but not quite). In each term, say $-18 x^8$, you get an error in the result which is like a random variable in the last binary digit. Since $x$ doesn't vary much, it's fairly homogeneous. But in the high powers the error varies more rapidly than in the low powers, so there are probably different scales in the variation (whence the 'fractal' structure).
However, your first plot shows a curious phenomenon when you get above 2. The variance seems to be multiplied by roughly 2 (my guess), which may be related to the fact that in binary, once above 2, you need one more digit to represent the number $x$ (and its powers), so you lose one digit of precision in the end.
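This guess can be sanity-checked empirically. Here is a rough sketch (my own, not from the answer above) that computes the exact evaluation error of the expanded form, via rational arithmetic, on samples just below and just above 2, and compares the spreads:

```python
import numpy as np
from fractions import Fraction

def expanded(x):
    return (x**9 - 18*x**8 + 144*x**7 - 672*x**6 + 2016*x**5
            - 4032*x**4 + 5376*x**3 - 4608*x**2 + 2304*x - 512)

def err(x):
    # exact value computed in rational arithmetic, rounded once at the end
    return expanded(x) - float((Fraction(x) - 2)**9)

below = [err(float(v)) for v in np.linspace(1.9999990, 1.9999999, 2000)]
above = [err(float(v)) for v in np.linspace(2.0000001, 2.0000010, 2000)]

# the spread of the error should be noticeably larger above 2
print(np.std(below), np.std(above))
```

If the extra-bit explanation is right, the standard deviation above 2 should come out roughly twice the one below, since every term's rounding error doubles when its power of $x$ crosses the corresponding power of 2.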