I'm studying M. Barnsley's book 'Fractals Everywhere', but I'm stuck in the chapter 'Fractal Dimension'.
Suppose $(X, d)$ is a complete metric space and let $A \in \mathcal{H}(X)$ be a nonempty compact subset of $X$. Write $\mathcal{N}(A, \varepsilon)$ for the smallest number of closed balls of radius $\varepsilon$ needed to cover $A$. Barnsley states the following:
The intuitive idea behind fractal dimension is that the set $A$ has fractal dimension $D$ if $\mathcal{N}(A, \varepsilon) \approx C \varepsilon^{-D}$ for some positive constant $C$, where $f(\varepsilon) \approx g(\varepsilon)$ means $\lim_{\varepsilon \to 0} \frac{\ln(f(\varepsilon))}{\ln(g(\varepsilon))}=1$.
I don't understand the intuition behind this definition. Can you explain this a little bit better?
(This is more for future readers than for you.)
This is just a really fancy formula for counting how many boxes (squares) of a given size are needed to cover a particular fractal. An example will clarify everything.
Take the humble square. It has an area of $L^D$, where $L$ is the length of a side and $D$ is the dimensionality of the square, which is $2$. Now, to find out how many small boxes are needed to cover the square, you need one more piece of information: the side length of a small box. Denote it by $\epsilon$. Dividing the area of the square by the area of one box gives the number of small boxes needed to cover the square:
$$N = \frac{L^D}{\epsilon^D}.$$
But we want to vary $\epsilon$, not the area of the square, so denote the area by a constant $C$:
$$N = C \cdot \epsilon^{-D}.$$
Take the logarithm of both sides:
$$\ln(N) = \ln(C \cdot \epsilon^{-D}),$$
then apply the product and power rules, writing $S = \frac{1}{\epsilon}$:
$$\ln(N) = \ln(C) + D \cdot \ln(S).$$
Rearranging,
$$\ln(N) - \ln(C) = D \cdot \ln(S),$$
$$D = \frac{\ln(N) - \ln(C)}{\ln(S)}.$$
Substitute the values for the square and see that it works, then move on to fractals like the Sierpinski triangle and the Koch snowflake.
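To see the formula in action on an actual fractal, here is a minimal numerical sketch (all function names are my own, not from Barnsley): it approximates the Sierpinski triangle by the chaos game, counts how many grid boxes of side $\epsilon$ the points occupy, and fits the slope of $\ln(N)$ against $\ln(1/\epsilon)$. The slope should come out near the known value $\ln 3/\ln 2 \approx 1.585$.

```python
import math
import random

def sierpinski_points(n=200_000, seed=0):
    """Approximate the Sierpinski triangle with the chaos game:
    repeatedly jump halfway toward a randomly chosen vertex."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        pts.append((x, y))
    return pts[100:]  # drop the first iterates (transient)

def box_count(points, eps):
    """N(eps): number of eps-sized grid boxes containing a point."""
    return len({(int(x / eps), int(y / eps)) for x, y in points})

pts = sierpinski_points()
scales = [2 ** -k for k in range(2, 7)]  # eps = 1/4 ... 1/64
logs = [(math.log(1 / e), math.log(box_count(pts, e))) for e in scales]

# Least-squares slope of ln(N) versus ln(S), S = 1/eps; the slope is D.
n = len(logs)
mx = sum(x for x, _ in logs) / n
my = sum(y for _, y in logs) / n
D = sum((x - mx) * (y - my) for x, y in logs) / sum((x - mx) ** 2 for x, _ in logs)
print(f"estimated D = {D:.3f}, exact ln(3)/ln(2) = {math.log(3) / math.log(2):.3f}")
```

Note the constant $C$ drops out when you fit a slope across several scales, which is why the estimate does not require knowing $C$ at all; only finitely many scales are used here, so the result is an approximation of the limit in the definition.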