The standard intuitive explanation of topological entropy is that it measures the exponential growth rate of the number of distinguishable orbits. I'm not quite sure why that is the case; any thoughts? Thanks.
Edit: The definition I'm using is: for a continuous transformation $T$ and an open cover $\alpha$, define $h(T,\alpha)= \lim_{n \to \infty} \frac{1}{n} H\left(\bigvee_{i=0}^{n-1}T^{-i}\alpha\right)$, where $\vee$ denotes the join of the covers (the common refinement, formed by pairwise intersections of their members) and $H(\beta)$ is the logarithm of the minimal cardinality of a subcover of $\beta$. The topological entropy of $T$ is then the supremum of $h(T,\alpha)$ over all open covers $\alpha$.
There are certainly many definitions of topological entropy, and in many situations they are equivalent (even setting aside noncompact spaces, higher-rank actions, or some classes of discontinuous maps).
The point is that, for a continuous map on a compact metrizable space $(X,d)$, the original notion of Adler, Konheim, and McAndrew is equivalent to a notion introduced independently by Bowen and by Dinaburg, and the latter answers your question directly.
Namely, letting $$ d_n(x,y)=\max \bigl\{d(T^k (x), T^k(y)) : 0 \le k \le n-1 \bigr\}, $$ the topological entropy of a continuous map $T\colon X\to X$ on a compact metric space equals $$\tag1 h(T)=\lim_{\varepsilon \to 0} \limsup_{n \to \infty} \frac 1 n \log N(n,\varepsilon), $$ where $N(n,\varepsilon)$ is the largest number of points $p_1,\ldots,p_m \in X$ such that $$\tag2 d_n(p_i,p_j) \ge \varepsilon \quad \text{for} \ i \ne j. $$
One usually interprets $(1)$ as an exponential growth rate (the limit in $n$) and $(2)$ as counting the orbits that are distinguishable at resolution $\varepsilon$: two points are counted as distinct only if their orbits are at least $\varepsilon$ apart at some time before $n$ (as if your vision had precision only $\varepsilon$).
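To make this concrete, here is a rough numerical sketch (my own illustration, not part of the answer above) estimating $\frac1n \log N(n,\varepsilon)$ for the doubling map $T(x)=2x \bmod 1$ on the circle, whose topological entropy is $\log 2 \approx 0.693$. A greedy maximal $(n,\varepsilon)$-separated set built from a finite grid only lower-bounds $N(n,\varepsilon)$, so this is a heuristic, not a computation of the entropy:

```python
import math

def T(x):
    """Doubling map on the circle [0, 1)."""
    return (2.0 * x) % 1.0

def circle_dist(x, y):
    """Arc-length distance on the circle of circumference 1."""
    d = abs(x - y)
    return min(d, 1.0 - d)

def d_n(x, y, n):
    """Bowen metric: max distance over the first n iterates."""
    best = 0.0
    for _ in range(n):
        best = max(best, circle_dist(x, y))
        x, y = T(x), T(y)
    return best

def separated_count(points, n, eps):
    """Greedily build an (n, eps)-separated set from candidate points.

    Its size is a lower bound for N(n, eps). Checking recently
    accepted points first makes rejections cheap, since a candidate
    usually clashes with a nearby accepted point.
    """
    sep = []
    for p in points:
        if all(d_n(p, q, n) >= eps for q in reversed(sep)):
            sep.append(p)
    return len(sep)

if __name__ == "__main__":
    eps = 0.1
    grid = [k / 2048 for k in range(2048)]  # candidate points on the circle
    for n in (3, 5, 7):
        N = separated_count(grid, n, eps)
        print(f"n={n}: N(n,eps)>={N}, (1/n) log N = {math.log(N) / n:.3f}")
```

The printed rates sit somewhat above $\log 2$ and converge slowly, as expected: heuristically $N(n,\varepsilon)\approx 2^{n-1}/\varepsilon$ here, so $\frac1n\log N(n,\varepsilon)\approx \log 2 + \frac{1}{n}\log\frac{1}{2\varepsilon}$, and one still has to take $n\to\infty$ before $\varepsilon\to 0$.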