While pondering over this question, I came across another interesting one. I am familiar with infinite tetration and its convergence over the reals. Nevertheless, when I saw this power tower, I couldn't help but wonder which distributions of $\pm$ signs make this converge or diverge.
For instance: $$e^{-e^{-e^{...}}}={}^\infty(e^{-1})< \infty \\ e^{e^{e^{...}}}={}^\infty e \to \infty$$
Let's define, $\forall n\geq 1$, $\epsilon_n \in \{\pm 1\}$ as the $n$th sign of the power tower $$P_\epsilon=e^{\epsilon_1e^{\epsilon_2e^{...}}}$$
defined recursively as $$ [P_\epsilon]_1(x) =e^{\epsilon_1 x}\\ [P_\epsilon]_{n+1}(x) = [P_\epsilon]_{n}(e^{\epsilon_{n+1} x})\\ [P_\epsilon]_n(e) = [P_\epsilon]_n\\ $$
Then, evidently, the first few terms of $(\epsilon_n)$ are irrelevant; only the asymptotic behaviour matters. If we take $$\epsilon_n=\begin{cases}1&\text{for } n\equiv 0 \mod k \\-1 &\text{else }\end{cases}$$
would this converge for some $k$?
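Before attacking this analytically, here is a quick numerical experiment (Python, function names my own) evaluating finite truncations of the tower from the top down. For the periodic patterns $k=2,3,5$ the truncations appear to stabilize rapidly:

```python
import math

def tower(sign, depth, seed=math.e):
    """Evaluate the finite tower [P]_depth(seed):
    exp(sign(1)*exp(sign(2)*...*exp(sign(depth)*seed)))."""
    x = seed
    for n in range(depth, 0, -1):
        t = sign(n) * x
        # cap to avoid overflow: exp of anything above ~700 is effectively +inf
        x = math.exp(t) if t < 700 else math.inf
    return x

# periodic pattern: +1 whenever n is a multiple of k, else -1
for k in (2, 3, 5):
    sign = lambda n, k=k: 1 if n % k == 0 else -1
    v, w = tower(sign, 200), tower(sign, 200 + k)
    print(f"k={k}: P_200 = {v:.10f}, |P_200 - P_{200 + k}| = {abs(v - w):.2e}")
```

For $k=2$, for instance, the truncations settle near the fixed point of $x\mapsto e^{-e^{x}}$; this is only numerical evidence, of course, not a proof.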
Lastly, one can conjure up all sorts of patterns for these $(\epsilon_n)_{n\in \mathbb{N}}:$ what if the $(-1)$s occur only at prime indices? What if $\epsilon$ is $−1$ with probability $(1−p)$ and $1$ with probability $p$?
I think this last question is very interesting, but probably hard to solve. It would seem an important threshold occurs at expected value $\mathbb{E}(\epsilon)=0$, i.e. $p=\frac{1}{2}$, since $e^{e^{-1}}$ is exactly the limiting base for convergence of a constant tower: $${}^\infty(e^{e^{-1}})<\infty$$
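As a sanity check on this limiting case (a small Python sketch of my own): $e^{e^{-1}}=e^{1/e}$ is Euler's upper bound for convergence of constant-base towers, and iterating the tower numerically creeps up toward its limit $e$ (very slowly, since the fixed point is tangent):

```python
import math

# Iterate x -> (e^(1/e))^x = exp(x/e); the tower ^infty(e^(1/e)) is the
# limit of these iterates, which approach e from below.
x = 1.0
for _ in range(10_000):
    x = math.exp(x / math.e)
print(x)  # slowly approaching e from below
```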
My guess is, for $\epsilon$ with $\mathbb{E}(\epsilon)>0$ it will diverge a.s. and for $\mathbb{E}(\epsilon)<0$ it will converge a.s., but I have no idea on how to prove it.
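One way to probe this conjecture numerically (a rough Python experiment, all names mine): sample a sign sequence once, then compare truncations of the same sequence at successive depths.

```python
import math
import random

def truncated_tower(eps, seed=math.e):
    """exp(eps[0]*exp(eps[1]*...*exp(eps[-1]*seed))), evaluated from the top."""
    x = seed
    for e in reversed(eps):
        t = e * x
        x = math.exp(t) if t < 700 else math.inf  # cap to dodge overflow
    return x

rng = random.Random(2024)
for p in (0.25, 0.5, 0.75):
    eps = [1 if rng.random() < p else -1 for _ in range(301)]
    a, b = truncated_tower(eps[:300]), truncated_tower(eps[:301])
    print(f"p={p}: |P_300 - P_301| = {abs(a - b):.3g}")
```

Of course, finite truncations cannot distinguish slow divergence from convergence near $p=\frac{1}{2}$; this is only a probe, not evidence either way.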
This reminds me a lot of Kolmogorov's three-series theorem, although I doubt it can be solved in a similar manner. I hope I haven't missed something that would make this problem trivial; that would be very disappointing. Thanks! (Feel free to edit to make it look better, or to add more appropriate tags.)
EDIT: This question has been edited to account for the non-associativity of exponentiation, a fact I somehow seem to have momentarily forgotten.

Um... I'm not sure if I'm missing something, but I believe that if $\epsilon_n=-1$ for at least one $n\ge 1$, then the value of the expression converges. Allow me to explain why.
Before I start: this is my first bounty, so I'm not entirely sure of the rules and standards of proof. I know I could have gone into more detail, but I don't think it would have helped with understanding the proof, and the lemmas I skipped over are fairly obvious/intuitive and tedious to prove. I'm open to feedback, but I think my response answers the question well.
Conventions:
We work with closed intervals $[a,b]\subseteq\overline{\mathbb{R}}=[-\infty,\infty]$, apply $\exp$ and negation to them elementwise, write $[x]$ for the singleton $\{x\}$, and write $\mu(a)$ for the length of an interval $a$. This means that $\exp([a,b])=[e^a,e^b]$ and $-[a,b]=[-b,-a]$, where $e^{-\infty}=0$ and $e^{\infty}=\infty$.
We are studying the reversed sequence of sets $a_{n-1}=\exp(\epsilon_na_n)$, where $\epsilon=(\epsilon_1,\epsilon_2,\ldots)$ and $\epsilon_n\in\{-1,1\}$ for $n\ge 1$. My notion of convergence is that if we set $a_{\omega}=\overline{\mathbb{R}}$ for some transfinite cutoff $\omega$, then in the limit $a_0$ is unique, containing a single value.
There are three cases: $\epsilon$ contains no $-1$s, finitely many $-1$s, or infinitely many $-1$s. If $\epsilon$ contains no $-1$s, then $a_0=\exp(\exp(\exp(\cdots)))$, and it is pretty clear that $a_0=[\infty]$. However, as I will show, this is the only case where $a_0$ diverges in any fashion.
If $\epsilon$ contains a finite number of $-1$s, then let $n$ be the last number such that $\epsilon_n=-1$. We know that $a_n=\exp(\exp(\exp(\cdots)))=[\infty]$, so $a_{n-1}=\exp(\epsilon_na_n)=\exp(-[\infty])=[0]$. Then, $a_0$ is a finite sequence of exponentials away from $a_{n-1}$, so $a_0$ is also finite and unique.
Now, let's consider the case where there are infinitely many $-1$s. Let $\omega$ be some transfinite cutoff with $\epsilon_{\omega}=-1$. By "transfinite" I really mean a limit over finite cutoffs: setting $b(\omega)_n=a_{\omega-n}$, I take $a_0=\lim_{\omega\to\infty}b(\omega)_{\omega}$. (Not great notation, but I'm sure you can see what I mean.)
We know that $a_{\omega+1}\subseteq[-\infty,\infty]$. Assuming the worst case scenario, let $a_{\omega+1}=[-\infty,\infty]$. Then, $\epsilon_{\omega+1}a_{\omega+1}=\pm[-\infty,\infty]=[-\infty,\infty]$, and $a_{\omega}=\exp(\epsilon_{\omega+1}a_{\omega+1})=\exp([-\infty,\infty])=[0,\infty]$. This means that $a_{\omega-1}=\exp(\epsilon_{\omega} a_{\omega})=\exp(-[0,\infty])=[0,1]$.
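The interval bookkeeping in this step can be mirrored mechanically; here is a small Python sketch (helper names mine) using the conventions $e^{-\infty}=0$ and $e^{\infty}=\infty$:

```python
import math

def exp_interval(iv):
    """exp applied endpoint-wise (exp is increasing); math.exp(-inf) is 0.0,
    and anything above ~700 is treated as +inf to avoid overflow."""
    lo, hi = iv
    f = lambda t: math.inf if t > 700 else math.exp(t)
    return (f(lo), f(hi))

def neg_interval(iv):
    lo, hi = iv
    return (-hi, -lo)

a = (-math.inf, math.inf)          # worst case for a_{w+1}
a = exp_interval(a)                # -> (0, inf) = a_w
a = exp_interval(neg_interval(a))  # -> (0, 1)  = a_{w-1}
print(a)
```

Running it reproduces the chain $[-\infty,\infty]\mapsto[0,\infty]\mapsto[0,1]$ from above.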
Let $\Omega=(\Omega_0,\Omega_1,\ldots)$ be the sequence of indices less than or equal to $\omega$ with $\epsilon_{\Omega_i}=-1$, in descending order. Also, let's define a function $E:\{1,2,\ldots\}\times[0,\infty]\to [0,1]$ by $E(1,x)=e^{-x}$ and $E(d+1,x)=E(d,e^x)$ for $d\ge 1$. Then it is easy to see that $a_{\Omega_{i+1}-1}=\exp(-\exp(\exp(\cdots\exp(a_{\Omega_i-1}))))=E(\Omega_i-\Omega_{i+1},a_{\Omega_i-1})$ (note $\Omega_i>\Omega_{i+1}$, so the gap $d=\Omega_i-\Omega_{i+1}$ is positive). I claim that not only is $\mu(a_{\Omega_{i+1}-1})<\mu(a_{\Omega_i-1})$, but the decrease is large enough to guarantee $\lim_{i\to\infty}\mu(a_{\Omega_{i}-1})=0$, whatever the gaps $d$ are.
This is because $E(d,\cdot)$ is Lipschitz-continuous on $[0,1]$ for every $d$, with Lipschitz constant $K=1$; in fact, for $d>1$, $K$ is much smaller (e.g. $K=e^{-1}$ for $d=2$). Unfortunately, when $d=1$, $e^{-x}$ has slope $-1$ at $x=0$. But since $|\partial_x E(1,x)|=e^{-x}$ equals $1$ only at that single endpoint, the mean value theorem still gives the strict inequality $|E(d,x)-E(d,y)|<|x-y|$ for all distinct $x,y\in[0,1]$. This means that $\mu(E(d,a))<\mu(a)$ for any interval $a\subseteq[0,1]$ of positive length. Furthermore, because $E$ is a smooth function, there can be no "catch-points" where the shrinking of $a$ would stall. Therefore, after infinitely many passes through $E$, the size of $a$ approaches $0$.
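The claimed contraction can be spot-checked numerically; here is a Python sketch (names mine) estimating the largest secant slope of $E(d,\cdot)$ over a grid on $[0,1]$:

```python
import math

def E(d, x):
    """E(1,x) = exp(-x); E(d+1,x) = E(d, exp(x)):
    d-1 upward exponentials, then one exp(-.)."""
    for _ in range(d - 1):
        if x > 700:   # exp would overflow; E is then indistinguishable from 0
            return 0.0
        x = math.exp(x)
    return math.exp(-x)

def lipschitz_estimate(d, n=2000):
    """Max secant slope |E(d,x)-E(d,y)|/|x-y| over a uniform grid of [0,1]."""
    vals = [E(d, i / n) for i in range(n + 1)]
    return max(abs(vals[i + 1] - vals[i]) * n for i in range(n))

for d in (1, 2, 3):
    print(f"d={d}: max secant slope ~ {lipschitz_estimate(d):.4f}")
```

The estimates should sit just below $1$ for $d=1$ and drop sharply for $d\ge 2$, consistent with the argument above.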
This means that if we start from progressively larger cutoffs $\omega$, the computed size of $a_0$ gets smaller and smaller, so that in the limit $a_0$ has size $0$ and is therefore unique. Hence the expression $\exp(\epsilon_1\exp(\epsilon_2\cdots))$ has a unique value, constrained in a convergent manner by the sets $a_0$ as $\omega$ approaches $\infty$, whenever there are infinitely many $-1$s in $\epsilon$.
We see that regardless of whether there are finitely many (at least one) or infinitely many $-1$s in $\epsilon$, the expression converges.