Iterated exponentiation is defined by
$$x \mapsto x^{x^{x^{\cdot^{\cdot^{\cdot}}}}}$$
or more conveniently, we denote by $^rx$ the expression $\underbrace{x^{x^{\cdot^{\cdot^{\cdot^{x}}}}}}_{r \text{ times}}$. Let us define the function
$$\begin{array}{cccc}
f_x:& \mathbb{N}& \to & \mathbb{R}\\
& r & \mapsto & ^rx.
\end{array} $$
Euler proved that $\lim_{r \to \infty} f_x(r)$ exists for real numbers $x \in [e^{-e}, e^{1/e}]$ [1] (there is also a convergence result in $\mathbb{C}$, but I am only interested in real numbers here).
Now we consider the Lambert $W$ function, which is defined to be the function satisfying
$$W(z)e^{W(z)} = z.$$
Then we know from [1] that when $f_x$ converges, it converges to
$$\lim_{r \to \infty} f_x(r) = \frac{W(-\ln(x))}{-\ln(x)}.$$
My question is: Given $\epsilon > 0$, how can we determine $r_0 \in \mathbb{N}$ such that $\left|f_x(r_0) - \frac{W(-\ln(x))}{-\ln(x)} \right| < \epsilon$? Is it possible to solve this algebraically, or is it only possible to do it numerically?
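For concreteness, here is a small stdlib-only Python sketch (my own, not from the question; the Newton-based `lambert_w` helper is a hand-rolled stand-in, not a library call) that compares the iterates $f_x(r)$ with the limit $W(-\ln x)/(-\ln x)$ and finds the smallest $r_0$ for a given $\epsilon$ by direct search:

```python
import math

def lambert_w(z, w=0.0):
    # Newton iteration on w*exp(w) = z; adequate for the principal
    # branch on (-1/e, 0), which covers x in (1, e**(1/e))
    for _ in range(80):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

def f(x, r):
    # the power tower ^r x = x**(x**(...**x)) with r copies of x
    y = x
    for _ in range(r - 1):
        y = x ** y
    return y

x = math.sqrt(2.0)          # lies inside [e**-e, e**(1/e)]
z = -math.log(x)
limit = lambert_w(z) / z    # W(-ln x)/(-ln x); equals 2 for x = sqrt(2)

# smallest r0 with |f_x(r0) - limit| < eps, found by brute-force search
eps = 1e-6
r0 = next(r for r in range(1, 200) if abs(f(x, r) - limit) < eps)
print(limit, r0)
```

This only answers the question numerically, of course; the error shrinks roughly by a factor $\ln 2$ per iteration for this base, which is exactly the geometric behaviour the answer below exploits.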
In principle, the Abel function or the Schröder function can be used for this. (In the following I change your notation to mine: $f_b(x)=b^x$ is the iterable base function, and $f^{oh}_b(x)$ is the $h$-times iterated function itself; $h$ is often called the "iteration height".)
The Schröder function, say $\sigma(x)$, provides something of an analogon to the $\operatorname{slog}(x)$ function. The $\operatorname{slog}(x)$ directly gives the iteration height required to arrive from $x_0=1$ at $x_h$ via $f_b^{oh}(x_0) = x_h$; it simply interprets a given value $x$ as the $h$'th iterate away from $x_0=1$. In short: it interprets any (suitable) given $x$ as $x_h$ and allows us to compute the required $h$.
The Schröder function gives a similar model; it allows the construction $$ f_b^{oh}(x_0) = \sigma^{-1}(c^h \sigma(x_0)) = x_h \tag 1$$ where $c$ is some constant depending on $b$. Let us denote the fixpoint, which is the limit $x_\infty$, by $\omega$, and let $x_0$ be some starting value larger than $\omega$ and $x_h$ the value $\varepsilon$ above $\omega$; then the/your question is:
$\qquad \qquad $ for which $h$ is $ \qquad f_b^{oh}(x_0) = x_h = \omega + \varepsilon \qquad \qquad$ ?
The Schröder-function approach now requires that the involved power series are recentered around the fixpoint $\omega$. With that in place, the height can be computed by $$\begin{array} {rll} \sigma^{-1}(c^h \cdot \sigma(x_0-\omega) ) +\omega &=& x_h \\ c^h \cdot \sigma(x_0-\omega) &=& \sigma(x_h-\omega) \\ c^h &=& w= \sigma(x_h-\omega) / \sigma(x_0-\omega) \\ h &=& \log_c(w) \end{array}$$ Unfortunately, neither $\sigma(x)$ nor $\sigma(x-\omega)$ is known to be expressible in simple form, and the coefficients of their power series have no simple general form either, so not only is the value of the power series $\sigma()$ an approximation, but (in general) even its coefficients are. The Schröder function is usually estimated by a limit expression (which I shall not show here).
In the tetration-forum there is a multitude of discussions on how to compute the Abel function (representing the $\operatorname{slog}()$ concept) or the Schröder (also called Koenigs) function; an, in my view, conceptually simple method for the construction/approximation of the power series of the Schröder function is to use finite $n\times n$ "Carleman" matrices, where $n$ can be increased to get better approximations.
Carleman matrices for some function $f(x)$ basically contain the coefficients of the power series of $f(x)$ and of all its powers $f(x)^c$, where the exponent $c$ is identical with the column index. Denoting by $V(x)$ the infinite "Vandermonde" row vector $V(x) = [1,x,x^2,x^3,x^4,...]$, we have, with the Carleman matrix $F$ for the function $f(x)$, $$ V(x) \cdot F = V(f(x)) \\ V(f(x)) \cdot F = V(f^{o2}(x)) \\ \vdots \\ V(x) \cdot F^h =V(f^{oh}(x)) \tag 2$$ and fractional iteration (in the convergent cases) is then approximable using the diagonalization into a diagonal matrix $D$: $$ F = M \cdot D \cdot W \qquad \qquad \text{where } W=M^{-1} \tag 3$$ (in these formulae I like to use $W$ instead of the LaTeX-unwieldy $M^{-1}$).
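As a sketch of that construction (my own Python/numpy stand-in for the Pari/GP computation; the truncation size $n=16$ is an arbitrary choice), one can build the truncated Carleman matrix of the recentered map $g(x)=(2^{x/2}-1)\cdot 2$ used in the example below, column by column, and verify the defining relation $V(x)\cdot F = V(f(x))$:

```python
import math
import numpy as np

n = 16                                  # truncation size; larger n -> better approximation
L = math.log(2.0) / 2.0                 # g(x) = 2*(exp(L*x) - 1): base sqrt(2) recentered at t = 2

# Taylor coefficients of g: a_k = 2 * L**k / k! for k >= 1, a_0 = 0
a = np.array([0.0] + [2.0 * L ** k / math.factorial(k) for k in range(1, n)])

# column j of the (truncated) Carleman matrix holds the coefficients of g(x)**j
G = np.zeros((n, n))
G[0, 0] = 1.0                           # g(x)**0 = 1
col = np.zeros(n)
col[0] = 1.0
for j in range(1, n):
    col = np.convolve(col, a)[:n]       # multiply the series by g(x), truncate
    G[:, j] = col

# check the defining relation V(x) . G = V(g(x)) at a small x
x = 0.1
Vx = x ** np.arange(n)
lhs = Vx @ G
print(lhs[1], 2.0 * (math.exp(L * x) - 1.0))
```

Since $g$ has no constant term, this matrix comes out triangular, which is what makes the diagonalization below so well-behaved.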
Powers of $F$ can then be expressed exactly by powers of the diagonal elements $d_{k,k}$ of $D$ alone; these are scalar and, as long as they are positive, admit any fractional power $d_{k,k}^h$.
Of course, to find the Schröder function via that diagonalization, we need $F$ for the recentered function $f(x+\omega)-\omega$. With this, the Schröder function is conceptually $$ V(x_0-\omega) \cdot M = V( \sigma(x_0-\omega)) \tag 4$$
For truncated matrices $F$ (and only such can we actually use) the result on the rhs is not really of Vandermonde form (but the diagonal of $D$ is), and we can approximate $$ V(x_0-\omega) \cdot M = Y_0\\ V(x_h-\omega) \cdot M = Y_h\\ $$ and then find the $h$ for the required $D^h$, such that $ Y_0 \cdot D^h = Y_h$, based on evaluation of the entries in $Y_0$ and $Y_h$.
For bases $b$ in the range $1 \lt b \lt \exp(\exp(-1)) \approx 1.44$ (with sufficient distance to those borders) and $32\times 32$ matrices we get approximations which can surely be used to, say, 8 digits precision; many alternative procedures are discussed in the tetration-forum to improve that approximation to many more digits and to allow bases nearer to the borders of the range.
So, focusing on your question again: we do not have an "exact" expression for the required iteration height ("how far is $\omega + \varepsilon$ from $x_0$ in terms of the iteration height $h$?"). The concepts of $\operatorname{slog}()$ (which actually implements the so-called "Abel" function) and of the Schröder function at least give, ideally, a "closed form" as an evaluatable power series (like that for $\exp(x)$), but there is not yet an easy relation to common "closed" expressions like $\sqrt[n]{x}, \log(x),\exp(x)$.
A short view into an actual example, using Pari/GP.
I use the base for exponentiation $b=\sqrt{2}$, which lies nicely in the range $1\lt b\lt e^{1/e}$. The attracting fixpoint is $x_\infty = t=2$, such that $b^t=t$.
To get the recentered Schröder function I first define the recentered and rescaled function $$g_t(x)=f_b(x+t) -t = (2^{x/2}-1)\cdot 2. $$ From this I get the Carleman matrix G, which is triangular (its top-left segment is not reproduced here). Its second column contains the coefficients of the power series $\small (2^{x/2}-1)\cdot 2 = 0.693147 x + 0.120113 x^2 + 0.0138760 x^3 + 0.00120227 x^4 + ... $
The diagonalization of G in Pari/GP, `M=mateigen(G); W=M^-1; D=W*G*M`, would give some matrix M whose column norming is unfortunate; for M to be a Carleman matrix again, it is required that its diagonal contains only ones. After norming M to this form (and W accordingly), we can see that the diagonal entries $d_{k,k}$ of D are the consecutive powers of $\log(t)$.
The coefficients of the power series for the Schröder function are now in the second column of M, so we have
$$ \small \sigma(x) = x + 0.564723 x^2 + 0.299646 x^3 + 0.155932 x^4 + ...$$
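These coefficients can be reproduced outside Pari/GP. The following numpy sketch (my own stand-in, with an arbitrary truncation size $n=16$) rebuilds the truncated Carleman matrix of $g_t$, diagonalizes it, norms the eigenvector matrix so that its diagonal is all ones, and reads the Schröder coefficients off the second column:

```python
import math
import numpy as np

n = 16
L = math.log(2.0) / 2.0                      # g(x) = 2*(exp(L*x) - 1), base sqrt(2), fixpoint t = 2
a = np.array([0.0] + [2.0 * L ** k / math.factorial(k) for k in range(1, n)])

# truncated Carleman matrix of g (column j = Taylor coefficients of g(x)**j)
G = np.zeros((n, n))
G[0, 0] = 1.0
col = np.zeros(n)
col[0] = 1.0
for j in range(1, n):
    col = np.convolve(col, a)[:n]
    G[:, j] = col

vals, M = np.linalg.eig(G)                   # G is triangular: eigenvalues are log(t)**k
order = np.argsort(-np.abs(vals.real))       # sort them as 1, c, c**2, ...  (c = log 2 < 1)
M = M.real[:, order]
M = M / np.diag(M)                           # renorm columns so the diagonal of M is all ones
sigma = M[:, 1]                              # Schroeder coefficients sit in the second column
print(sigma[2], sigma[3], sigma[4])
```

The printed values agree with the coefficients $0.564723, 0.299646, 0.155932$ shown above.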
Having this, we try an example. We set $x_0$ not too far from the fixpoint, so that $\sigma(x_0-t) $ converges, for instance $x_0= 2.3$. Next we choose some small $\varepsilon$, say $\varepsilon = 0.001$, so that the critical $x_h = t+\varepsilon=2.001$.
We get $\sigma_0 = \sigma(x_0-t) \approx 0.360409 $ and $\sigma_h = \sigma(t+\varepsilon-t) = \sigma(\varepsilon) \approx 0.00100057 $. The required iteration height $h$ to arrive from $x_0$ at $x_h$ should now be
$$ h= \log(\sigma_h / \sigma_0) / \log(\log(t)) \approx 16.0613 $$ and indeed a direct check of the iterates confirms this: the $\varepsilon$-point is between 16 and 17 iterations away from $x_0$.
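The check can be reproduced with a few lines of Python (a stand-in for the Pari/GP iteration, which is not shown here): iterate $x \mapsto b^x$ from $x_0 = 2.3$ and watch when the iterates cross $t+\varepsilon = 2.001$.

```python
import math

b = math.sqrt(2.0)      # the base of the example
x = 2.3                 # starting point x0
iterates = []
for _ in range(17):
    x = b ** x
    iterates.append(x)
# the 16th iterate is still above t + eps = 2.001, the 17th already below
print(iterates[15], iterates[16])
```
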
A general remark: there is not yet an agreed solution for tetration in general. To show how different methods of computing $x_h$ give different results, I have made a small essay for the discussion in our "tetrationforum". I used the base $b=4$, which is outside the range $1..\exp(\exp(-1))$, so the fixpoint $x_{-\infty}$ is complex, and made pictures for a few selected methods which make the huge differences visible. The short presentation is here