Are the intriguing and lovely Mandelbrot Set hoops and curls the result of floating point computation inaccuracy?


I have written various Mandelbrot Set computer implementations, including one with dynamic zoom and playback. Some use fixed-point arithmetic, others use the FPU.

I have seen this question, which suggests that every bud is a mathematically smooth shape surrounded by smaller buds.

Are the parades of sea-horse shapes, and so on, a side effect of the limitations of computer floating-point arithmetic rather than part of the actual Mandelbrot Set?

There are 3 best solutions below

---

No, a properly written program will make sure the display is not driven by floating-point errors. As the magnitude of the numbers in the set never exceeds $2$, you can do all the computations in fixed point if you want. It is also usual not to be limited to the word size of your computer, but to use some extended-precision arithmetic implementation.
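As an illustration of the fixed-point approach, here is a minimal sketch of the escape-time iteration using only integer arithmetic; the scale factor, iteration cap, and function names are my own illustrative choices, not from any particular program:

```python
# Escape-time iteration for z -> z^2 + c using integer fixed-point
# arithmetic only: no floating point is involved after the initial
# conversion. SCALE (32 fractional bits) is an illustrative choice.
SCALE = 1 << 32

def to_fixed(x: float) -> int:
    return int(x * SCALE)

def mandelbrot_iterations(cr: int, ci: int, max_iter: int = 100) -> int:
    """Return the escape iteration for c = cr + ci*i (both fixed point)."""
    zr = zi = 0
    bailout = 4 * SCALE  # |z|^2 > 4 means the orbit escapes
    for n in range(max_iter):
        zr2 = (zr * zr) // SCALE
        zi2 = (zi * zi) // SCALE
        if zr2 + zi2 > bailout:
            return n
        zr, zi = zr2 - zi2 + cr, 2 * ((zr * zi) // SCALE) + ci
    return max_iter  # never escaped: treat c as inside the set

inside = mandelbrot_iterations(to_fixed(0.0), to_fixed(0.0))   # stays bounded
outside = mandelbrot_iterations(to_fixed(1.0), to_fixed(0.0))  # escapes quickly
```

Because Python integers are unbounded, the same loop works unchanged at much higher precision: just increase the number of fractional bits in `SCALE`.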

One simple way to verify this is to compare the figures produced by many different programs. If you take the coordinates of a particular figure and type them into different programs, you get the same picture, despite the fact that the calculation techniques differ. I have seen programs that ran out of precision; the pictures do not look like familiar Mandelbrot images at all.

---

No. The Mandelbrot set is an example of a complex dynamical system which has chaotic behaviour, i.e. the long-term behaviour of the orbit of a point can be very sensitive to the initial conditions. This sensitivity means that two points that are close together in the complex plane may follow radically different trajectories under iteration. This ultimately leads to the fractal nature of (the boundary of) the Mandelbrot set. It is not the result of numerical error, but a real mathematical phenomenon.
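To see this sensitivity concretely, here is a small sketch (my own illustration, not from the answer): two parameters $c$ near the boundary point $-0.75$, differing only slightly, take very different numbers of iterations to escape.

```python
def escape_time(c: complex, max_iter: int = 1000) -> int:
    """Number of iterations of z -> z^2 + c (from z = 0) until |z| > 2."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter  # never escaped within the budget

# Two nearby parameters in "seahorse valley", just above c = -0.75:
n1 = escape_time(-0.75 + 0.01j)
n2 = escape_time(-0.75 + 0.02j)
# Halving the distance to the boundary roughly doubles the escape time,
# even though the two parameters are only 0.01 apart.
```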

However, I don't think that the Mandelbrot set is a very good first example of this phenomenon, as there are several layers of complexity that we might want to discuss before getting there. Instead, let me try to convince you by considering Newton's method for finding the roots of a particular polynomial.

Newton in a Nutshell: Let $f$ be a polynomial over $\mathbb{C}$. Fix some $z_0\in\mathbb{C}$, and define $$z_{n+1} := z_n - \frac{f(z_n)}{f'(z_n)},$$ i.e. $z_{n+1}$ is the point where the tangent line to $f$ at $z_n$ crosses zero. For most values of $z_0$, the limit $z_{\infty} := \lim_{n\to\infty} z_n$ exists. Moreover, when $z_{\infty}$ exists, we have $f(z_{\infty}) = 0$.

Newton's method is a way of numerically approximating the roots of sufficiently smooth functions (though I'm restricting our attention to only polynomials here). For an example of how the method works, I've put together a little Desmos demonstration for finding a root of the polynomial $f(z) = z(z+1)(z-1)$. By dragging the "Initial Guess" around, you can see how the first couple of iterations begin to approximate roots.

[Desmos demonstration: Newton iterations for $f(z) = z(z+1)(z-1)$, with a draggable initial guess]
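The iteration itself is only a few lines. Here is a sketch in Python for the same polynomial $f(z) = z(z+1)(z-1) = z^3 - z$; the function names and step count are my own choices:

```python
def newton_step(z: complex) -> complex:
    # One Newton iteration for f(z) = z^3 - z, with f'(z) = 3z^2 - 1.
    return z - (z**3 - z) / (3 * z**2 - 1)

def newton_limit(z0: complex, steps: int = 50) -> complex:
    # Iterate a fixed number of times; for most z0 this approximates z_infinity.
    z = z0
    for _ in range(steps):
        z = newton_step(z)
    return z

# Most initial guesses settle onto one of the roots 0, 1, -1, but guesses
# near the critical points +/- 1/sqrt(3) (where f' = 0) bounce around first.
root = newton_limit(0.6 + 0j)  # first step jumps out to about 5.4, then descends to 1
```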

However, note that there are a couple of caveats to this:

  1. For most values of $z_0$, the limit exists. However, there are circumstances under which the limit does not exist (for example, if $f'(z_n) = 0$ for some $n$). So one reasonable first question is "For which values of $z_0$ does the limit exist?" In playing with the Desmos demo, you might think about what happens when you choose initial guesses that are near $\pm0.5$.
  2. While $z_{\infty}$ is a zero of $f$, it is not always possible to tell which zero it will be. In our example of $z \mapsto z(z+1)(z-1)$, we know that there are three zeros ($z \in \{0, \pm 1\}$) but, given an initial guess, we don't know which zero will be the limit point (assuming that the limit exists). Hence it is also reasonable to ask "Given an initial point $z_0$, what is the limit point $z_{\infty}$?"

One way to get at this is to attempt to graph the iterations. To do this, start with the complex plane:

[The complex plane, with each point assigned a color by its position]

Here, points in the complex plane are assigned a color. Black points are near zero, red points are along the positive real axis, cyan points are along the negative real axis, and so on. To "graph" a function on the complex plane, pick a point, figure out where the function sends that point, and color the original point with the color of its image. For example, if we apply one iteration of Newton's method to the complex plane (for the function $f(z) = z(z+1)(z-1)$), we obtain

[The complex plane after one iteration of Newton's method for $f(z) = z(z+1)(z-1)$]

Note that points near zero remain near zero, and that numbers of large magnitude don't seem to move very far, but that something interesting is happening near the origin. After 100 more iterations, we get this picture:

[The same picture after 100 further iterations: three basins of attraction]

There are basically three colors in this picture: black (corresponding to the point $0$), red (corresponding to the point $1$), and cyan (corresponding to the point $-1$). This picture shows us where a given initial value $z_0$ is sent by Newton's method. Nearly every point in the complex plane is sent to one of the three roots, but observe that there is some chaotic behaviour near the boundary of each basin of attraction (the red region is the basin of attraction for the root $z=1$, as every point in the red region is "attracted to" this root by Newton's method): there are little blobs of different colors running along these boundaries.
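A picture like this can be produced by classifying each point of a grid by the root it converges to. A minimal sketch, where the grid size, tolerance, and names are my own illustrative choices:

```python
ROOTS = (0.0, 1.0, -1.0)  # the three zeros of f(z) = z^3 - z

def basin(z0: complex, steps: int = 100) -> int:
    """Index into ROOTS of the root Newton's method reaches from z0,
    or -1 if it hits a critical point or fails to converge."""
    z = z0
    for _ in range(steps):
        d = 3 * z * z - 1
        if d == 0:          # f'(z) = 0: the Newton step is undefined
            return -1
        z = z - (z**3 - z) / d
    dists = [abs(z - r) for r in ROOTS]
    i = min(range(3), key=dists.__getitem__)
    return i if dists[i] < 1e-6 else -1

# Color each pixel of a coarse grid by its basin; plotting the labels
# (e.g. black/red/cyan for roots 0, 1, -1) reproduces the picture above.
grid = [complex(x / 10, y / 10) for y in range(-20, 21) for x in range(-20, 21)]
labels = [basin(z) for z in grid]
```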

If you played with the Desmos demo, you might have noticed that if you pick an initial guess near $\pm0.5$, the subsequent guesses "bounce around" a lot before they start to settle in towards one of the roots. This is exactly the chaotic behaviour that leads to the little blobs in the last image above. This detail is the result of a real phenomenon (which you can readily see for yourself by fiddling with the demonstration), and is not due to numerical errors introduced by the computer.

---

I'm not absolutely sure, but I think the shadowing lemma applies to the Mandelbrot set, meaning that near any rounded-off floating point trajectory there is a true trajectory that started nearby, i.e. the backward error is small:

The main causes of error are round-off error and truncation error. The forward error of the algorithm is the difference between the result and the solution; in this case, Δy = y* − y. The backward error is the smallest Δx such that f(x + Δx) = y*; in other words, the backward error tells us what problem the algorithm actually solved.

(quote from https://en.wikipedia.org/wiki/Numerical_stability )

If this is the case, it should be possible to find the floating-point precision required to make the backward error as small as desired (e.g. a fraction of the pixel spacing), or at least to check whether your given precision was enough to meet your backward-error target, using derivatives. Checking the rounding errors of the derivatives themselves adds some complication, but perhaps not an insurmountable one.
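One crude way to do such a check, sketching the idea rather than a proper derivative-based bound, is to re-run the same orbit at much higher precision and compare, e.g. with Python's decimal module. The parameter, precision, and function name below are illustrative:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits as a high-precision reference

def orbit_error(cr: str, ci: str, iters: int) -> float:
    """Largest observed difference between a double-precision orbit of
    z -> z^2 + c and the same orbit computed with 50-digit decimals."""
    c = complex(float(cr), float(ci))
    z = 0j
    Cr, Ci = Decimal(cr), Decimal(ci)
    Zr = Zi = Decimal(0)
    worst = 0.0
    for _ in range(iters):
        z = z * z + c
        Zr, Zi = Zr * Zr - Zi * Zi + Cr, 2 * Zr * Zi + Ci
        err = max(abs(z.real - float(Zr)), abs(z.imag - float(Zi)))
        worst = max(worst, err)
    return worst

# Inside the main cardioid the orbit converges and the error stays tiny;
# for parameters near the boundary it can grow quickly with iteration count.
err = orbit_error("-0.1", "0.1", 20)
```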

See also: