I was thinking of a problem and have an answer through computer programming, but am unable to prove it mathematically. Start with the following: $$\frac{1}{2}\bigg(\frac{1}{2} + \frac{1}{3}\bigg)\approx 0.41666$$ Replace every unit fraction being summed in the parentheses (in this case, the inner 1/2 and 1/3) with $$\frac{1}{n}\bigg(\frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n+1}\bigg)$$ where $n$ is the denominator of the unit fraction. If we apply this process once, we get $$\frac{1}{2}\bigg(\frac{1}{2}\bigg(\frac{1}{2} + \frac{1}{3}\bigg)+\frac{1}{3}\bigg(\frac{1}{2} + \frac{1}{3}+\frac{1}{4}\bigg)\bigg)\approx 0.3888$$ and if we apply it again, we get $$\frac{1}{2}\bigg(\frac{1}{2}\bigg(\frac{1}{2}\bigg(\frac{1}{2} + \frac{1}{3}\bigg) + \frac{1}{3}\bigg(\frac{1}{2} + \frac{1}{3}+ \frac{1}{4}\bigg)\bigg)+\frac{1}{3}\bigg(\frac{1}{2}\bigg(\frac{1}{2} + \frac{1}{3}\bigg) + \frac{1}{3}\bigg(\frac{1}{2} + \frac{1}{3}+ \frac{1}{4}\bigg)+\frac{1}{4}\bigg(\frac{1}{2} + \frac{1}{3}+ \frac{1}{4}+ \frac{1}{5}\bigg)\bigg)\bigg)\approx 0.37754$$
As we repeat this process, the result seems to approach $e^{-1}$.
If anyone could prove this or direct me to a proof of this, that would be wonderful!
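For reference, here is one way the iterates can be computed exactly (a sketch; the function name `value` and the use of exact rationals are my own choices, not necessarily how I originally computed it):

```python
from fractions import Fraction
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def value(n: int, k: int) -> Fraction:
    """Result of expanding the subtree rooted at 1/n through k rounds of replacement."""
    if k == 0:
        return Fraction(1, n)
    # replace 1/n by (1/n)(1/2 + 1/3 + ... + 1/(n+1)), one level deeper
    return Fraction(1, n) * sum(value(m, k - 1) for m in range(2, n + 2))

# value(2, 1) is the starting expression; each further k adds one replacement round
for k in range(1, 6):
    print(k, float(value(2, k)))
print("1/e =", math.exp(-1))
```

The first few values match the ones above: $5/12\approx 0.41666$, $7/18\approx 0.3888$, $1631/4320\approx 0.37754$, and the sequence keeps creeping down toward $0.36787\ldots$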
Consider the following random process: start at $n_0=2$; at each step, given the current value $n_k$, pick $n_{k+1}$ uniformly at random from $\{2,3,\ldots,n_k+1\}$ (each of these $n_k$ values with probability $1/n_k$); and repeat for however many steps. Your expression is ${\mathbb E}[1/n_k]$ - that is, it is the expected value of $1/n_k$ after $k$ steps. This is a Markov process and we can apply our typical analysis tools to it*.
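A quick Monte Carlo sanity check of this reading (a minimal sketch, assuming the chain starts at $n_0=2$ and steps uniformly into $\{2,\ldots,n_k+1\}$; the trial count and step count are arbitrary):

```python
import math
import random

def run_chain(k: int, rng: random.Random) -> int:
    """One trajectory: start at n_0 = 2, step k times."""
    n = 2
    for _ in range(k):
        # choose the next state uniformly from {2, ..., n+1}: n choices, prob 1/n each
        n = rng.randint(2, n + 1)
    return n

rng = random.Random(0)
trials = 200_000
estimate = sum(1 / run_chain(20, rng) for _ in range(trials)) / trials
print(estimate, "vs 1/e =", math.exp(-1))
```

The sample mean of $1/n_k$ lands within Monte Carlo noise of $1/e$.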
In particular, as $k$ increases, irrespective of our starting position, the distribution of $n_k$ approaches a unique steady state distribution - that is, a distribution invariant under the step.
Let $p_m$ be the probability that, for large $k$ (i.e. in the limit), we have $n_k=m$. It must be a steady state in the sense that $$p_m=\sum_{\ell=m-1}^{\infty}\frac{1}{\ell}\cdot p_{\ell}$$ (interpreting $p_1=0$, since the chain never visits $1$), which just says that, if we started in this distribution, we would stay in it.
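Before doing anything clever, one can find this steady state numerically: truncate the chain to states $2,\ldots,N$ (the cutoff $N=40$ below is an arbitrary choice; the tail mass dies off extremely fast) and apply the step repeatedly until the distribution stabilizes:

```python
import math

N = 40                         # arbitrary truncation of the infinite state space
p = {n: 0.0 for n in range(2, N + 1)}
p[2] = 1.0                     # start concentrated at n = 2
for _ in range(200):           # iterate the step until the distribution stabilizes
    q = {n: 0.0 for n in range(2, N + 1)}
    for l in range(2, N + 1):
        # from state l, move to each m in {2, ..., l+1} with probability 1/l
        for m in range(2, min(l + 1, N) + 1):
            q[m] += p[l] / l
    p = q
C = sum(p[n] / n for n in range(2, N + 1))
print(C, "vs 1/e =", math.exp(-1))
```

The resulting value of $\sum_m p_m/m$ agrees with $1/e$ to many digits, which is reassuring before we go hunting for the steady state in closed form.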
Now, we can construct a probability generating function for the steady state; we'll shift the indexing slightly for convenience. Let: $$f(x)=p_2x+p_3x^2+p_4x^3+\ldots.$$ Note that we can use formal integration to get $$\int_0^x f(t)\,dt=\frac{p_2}2x^2+\frac{p_3}3x^3+\ldots$$ Then, noting $\frac{1}{1-x}=1+x+x^2+\ldots$ and slipping in an extra $x$ for fun, we can define a new function $g(x)$ as follows: $$g(x)=\frac{x\int_0^x f(t)\,dt}{1-x}=\frac{p_2}2x^3+\left(\frac{p_2}2+\frac{p_3}3\right)x^4+\left(\frac{p_2}2+\frac{p_3}3+\frac{p_4}4\right)x^5+\ldots.$$ Why do this? Because if we let $C=\frac{p_2}2+\frac{p_3}3+\ldots$ - that is, $C$ is exactly the quantity we're looking for - we note that $$\frac{Cx}{1-x}-g(x)=\sum_{m=1}^{\infty}\sum_{\ell=m}^{\infty}\frac{1}{\ell}\cdot p_{\ell}\cdot x^m,$$ which, by the steady state equation, we recognize as $f(x)$. Thus, we get an equation: $$(1-x)f(x)=Cx-x\int_0^x f(t)\,dt.$$ Differentiating both sides of this equation twice (to clear away the integral) yields the second-order differential equation $$(1-x)f''(x)+(x-2)f'(x)+2f(x)=0,$$ and plugging that into Mathematica gives a solution: $$f(x)=c_1\cdot xe^x+c_2\cdot \frac{e+xe^x\operatorname{Ei}(x)}e$$ where $c_1,c_2$ are as yet unknown constants. However, we know that $f(0)=0$ from the definition of $f$, and $f(1)=1$ since $p_2+p_3+\ldots=1$, these being probabilities. This fixes $c_2=0$ and $c_1=\frac{1}e$.
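If you don't trust the Mathematica step, the functional equation $(1-x)f(x)=Cx-x\int_0^x f(t)\,dt$ can be checked directly for the claimed solution $f(x)=\frac1e xe^x$ with $C=1/e$; here is a numerical check using composite Simpson quadrature (the sample points and subinterval count are arbitrary):

```python
import math

def f(t: float) -> float:
    # the claimed solution f(x) = x e^x / e
    return t * math.exp(t) / math.e

def simpson(g, a: float, b: float, n: int = 1000) -> float:
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

C = 1 / math.e
for x in (0.1, 0.5, 0.9):
    lhs = (1 - x) * f(x)
    rhs = C * x - x * simpson(f, 0.0, x)
    assert abs(lhs - rhs) < 1e-9
print("functional equation holds at the sampled points")
```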
Thus $f(x)=\frac{1}exe^x$. This looks promising already! Then, observe that $\int_0^1 f(t)\,dt=C={\mathbb E}[1/n]$, just by looking at the series we derived earlier for the integral! So we just integrate that expression: $$C=\frac1e\int_0^1 te^t\,dt=\frac1e\Big[te^t-e^t\Big]_0^1=\frac1e.$$ Hurray! It worked! As a bonus, comparing coefficients in $\frac1exe^x=\sum_{n\geq 2}\frac{x^{n-1}}{e(n-2)!}$, the steady state probabilities are just $p_n=\frac{1}{e(n-2)!}$ for $n\geq 2$.
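Both conclusions are easy to sanity-check numerically: the claimed steady state $p_n=\frac{1}{e(n-2)!}$ satisfies the balance equation from above (the telescoping identity $\frac{\ell-1}{\ell!}=\frac{1}{(\ell-1)!}-\frac{1}{\ell!}$ is what makes it work), and $\sum_n p_n/n$ comes out to $1/e$. A short check, truncating the sums at an arbitrary cutoff:

```python
import math

def p(n: int) -> float:
    # claimed steady state: p_n = 1 / (e * (n-2)!)
    return 1.0 / (math.e * math.factorial(n - 2))

TAIL = 60  # far enough out that 1/(n-2)! is utterly negligible

# the probabilities sum to 1
assert abs(sum(p(n) for n in range(2, TAIL)) - 1.0) < 1e-12

# balance: p_m = sum_{l >= m-1} p_l / l   (with p_1 = 0, so l starts at max(m-1, 2))
for m in range(2, 12):
    rhs = sum(p(l) / l for l in range(max(m - 1, 2), TAIL))
    assert abs(p(m) - rhs) < 1e-12

# and the expected value of 1/n under this distribution is 1/e
C = sum(p(n) / n for n in range(2, TAIL))
print(C, "vs 1/e =", math.exp(-1))
```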
(*I'm ignoring convergence issues, which are relevant since this Markov chain has infinitely many states. The generating function manipulations can be justified after the fact, despite involving serious convergence problems: uniqueness of a solution is not so hard to establish - the usual argument for existence and uniqueness of a steady state still gives uniqueness for infinite Markov processes - and the fact that $f(x)=\frac{1}exe^x$ is a solution is also not too hard to show. Convergence of the chain to its steady state raises more serious issues; I think they can be resolved by bounding the probability that $n_k$ is large, which shouldn't be too hard, since the probability that $n_{k+1}>n_k$, given $n_k$, is just $\frac{1}{n_k}$.)