Let $(X_n)$ be an independent sequence of random variables, where $X_n$ has the exponential distribution with parameter $\lambda_n$. What conditions on the sequence $(\lambda_n)$ are necessary for $X_n$ to converge almost surely to $0$?
I already know (by the Borel–Cantelli lemma) that a condition which is both necessary and sufficient is that for every $\varepsilon > 0$ the series $\sum_{n=1}^{\infty} \mathbb{P}(|X_n| > \varepsilon)$ converges.
To check this, I set $\varepsilon = \frac{1}{n}$ (no need to wander through the reals, $\frac{1}{n}$ should be enough) and look at: \begin{align} \sum_{n=1}^{\infty} \mathbb{P}\left(|X_n| > \frac{1}{n}\right) &= \sum_{n=1}^{\infty} \left(1 - \mathbb{P}\left(|X_n| \leq \frac{1}{n}\right)\right) = \sum_{n=1}^{\infty} \left(1 - F_{X_n}\left(\frac{1}{n}\right)\right) \\ &= \sum_{n=1}^{\infty} \left(1 - \left(1 - e^{-\frac{\lambda_n}{n}}\right)\right) = \sum_{n=1}^{\infty} e^{-\frac{\lambda_n}{n}} \end{align}
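For what it's worth, here is a quick numerical sanity check of the partial sums of this series, with two illustrative choices of $\lambda_n$ (my own examples, not anything canonical):

```python
import math

def partial_sum(lam, N):
    """Partial sum of sum_{n=1}^N exp(-lam(n)/n), the series derived above."""
    return sum(math.exp(-lam(n) / n) for n in range(1, N + 1))

# If lam_n = n, every term is the constant e^{-1}, so the series diverges.
diverging = partial_sum(lambda n: n, 10_000)

# If lam_n = 2 n log(n + 1), each term is (n + 1)^{-2}, so the series converges.
converging = partial_sum(lambda n: 2 * n * math.log(n + 1), 10_000)
```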
This is where I got stuck. I think the next step would be to deduce what $\frac{\lambda_n}{n}$ must converge to in order to make the entire series convergent. How can we deduce that?
Note that this is similar to that question, but I am looking for the necessary conditions on $\lambda_n$.
Firstly, an error in your thinking: it is not at all the same thing to set $\varepsilon_n$ as a sequence depending on $n$ as it is to fix a single $\varepsilon > 0$.
The mechanics of this become clearer if you consider what B-C is saying. Since $X_n \ge 0$ surely, it suffices to argue that $\limsup X_n = 0$ a.s. What does this mean? By definition, it means that for every $\varepsilon > 0$ there is some $N_\varepsilon$ such that the event $\{\forall n > N_{\varepsilon}: X_n < \varepsilon\}$ occurs a.s. B-C in essence uses the union bound for this: $$ P(\exists n > N: X_n > \varepsilon) \le \sum_{n > N} P(X_n > \varepsilon),$$
and convergence of $\sum_{n = 1}^\infty P(X_n > \varepsilon)$ is enough for this upper bound to become arbitrarily small for large enough $N$.
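To make the union bound concrete, here is a small Monte Carlo sketch (my own illustration, with the hypothetical rate $\lambda_n = 2\log n$ and $\varepsilon = 1$, so $P(X_n > \varepsilon) = n^{-2}$): the frequency of the event $\{\exists\, n \in (N, M]: X_n > \varepsilon\}$ is small precisely because the tail sum is small.

```python
import math
import random

random.seed(0)

N, M, eps = 50, 500, 1.0          # look at indices n in (N, M]
lam = lambda n: 2 * math.log(n)   # hypothetical rate; P(X_n > 1) = n^{-2}

# Tail sum bounding P(exists n in (N, M]: X_n > eps) via the union bound.
tail_sum = sum(math.exp(-lam(n) * eps) for n in range(N + 1, M + 1))

# Monte Carlo estimate of the probability that some X_n exceeds eps.
trials = 2000
hits = 0
for _ in range(trials):
    if any(random.expovariate(lam(n)) > eps for n in range(N + 1, M + 1)):
        hits += 1
freq = hits / trials
```

Here `tail_sum` comes out below $0.02$, and the empirical frequency is of the same small order.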
In fact, setting $\varepsilon_n = 1/n$ amounts to studying $\limsup n X_n$, and the sufficient condition you set up tells you that this quantity is bounded by $1$ a.s. (exercise: write this out explicitly). This certainly implies that $X_n \to 0$ a.s., but not every sequence with $X_n \to 0$ a.s. satisfies it.
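A simulation sketch of this exercise, with the hypothetical choice $\lambda_n = 3 n \log n$ (so that $P(X_n > 1/n) = n^{-3}$ is summable and the argument above applies):

```python
import math
import random

random.seed(1)

# Hypothetical rate lam_n = 3 n log n: P(n*X_n > t) = exp(-3 t log n) = n^{-3t},
# so P(X_n > 1/n) = n^{-3} is summable and limsup n*X_n <= 1 a.s.
scaled = [n * random.expovariate(3 * n * math.log(n)) for n in range(2, 10_000)]

# Along one sample path, n*X_n should exceed 1 only a handful of times, if at all.
exceedances = sum(1 for v in scaled if v > 1.0)
```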
Next, the necessary condition:
For independent $X_n$, B-C is the complete story. This is the so-called second Borel–Cantelli lemma, which in this setting gives: if the $X_n$ are independent and $\sum P(X_n > \varepsilon) = \infty$ for some $\varepsilon > 0$, then $\limsup X_n \ge \varepsilon$ a.s., which implies that $X_n > \varepsilon/2 > 0$ infinitely often. In particular, this means that the condition $$ \forall \varepsilon > 0: \sum e^{-\lambda_n \varepsilon} < \infty$$ is both necessary and sufficient.
It seems you want to know what growth rate of $\lambda_n$ as a function of $n$ is necessary for this to happen. This is one of those things that doesn't have a neat answer. For instance, $\lambda_n = \log n$ clearly doesn't suffice, because for small enough $\varepsilon$ we have $e^{-\varepsilon \log n} = n^{-\varepsilon} \ge 1/n$. Just as clearly, $\lambda_n = \Omega( \log^{1 + c} n)$ is sufficient for any $c > 0$, since then eventually $e^{-\lambda_n \varepsilon} \le \frac{1}{n^{\varepsilon \log^c n}} \le \frac{1}{n^2}$ (taking $n > \exp((2/\varepsilon)^{1/c})$).
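A numerical sketch of both claims at, say, $\varepsilon = 1/2$ (the two rates are the ones from the paragraph above; this is an illustration, not a proof):

```python
import math

def partial_sum(lam, eps, N, start=2):
    """Partial sum of sum_n exp(-eps * lam(n)), starting at n=2 since log 1 = 0."""
    return sum(math.exp(-eps * lam(n)) for n in range(start, N + 1))

eps = 0.5

# lam_n = log n: terms are n^{-1/2}, so the partial sums grow like 2*sqrt(N).
slow = partial_sum(math.log, eps, 100_000)

# lam_n = (log n)^2: terms are n^{-(log n)/2}, eventually << n^{-2}; sums stay bounded.
fast = partial_sum(lambda n: math.log(n) ** 2, eps, 100_000)
```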
We can start refining this. Set $\mu_n := \frac{\lambda_n}{\log n}$. The same reasoning as above tells us that if $\liminf \mu_n = \infty$ (i.e., $\mu_n \to \infty$), then we have convergence for every $\varepsilon$, while if $\limsup \mu_n < \infty,$ then we have divergence for small enough $\varepsilon$.
The boundary case $\liminf \mu_n < \infty$, $\limsup \mu_n = \infty$ can exhibit both behaviours: if $\mu_n = 1 + n\mathbf{1}\{ n \textrm{ is odd}\},$ then we have divergence for small $\varepsilon,$ but if $\mu_n = 1 + n\mathbf{1}\{ n \textrm{ is not a power of $2$}\}$ then we have convergence for every $\varepsilon$. One could presumably refine this further, but it's already quite messy. The B-C condition is a neat way to encapsulate all of this.
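The two boundary examples can be checked numerically at a small $\varepsilon$ (here $\varepsilon = 1/2$; again a sketch of the computation, not a proof):

```python
import math

def partial_sum(mu, eps, N):
    """Partial sum of sum_n exp(-eps * mu(n) * log n), i.e. lam_n = mu_n * log n."""
    return sum(math.exp(-eps * mu(n) * math.log(n)) for n in range(2, N + 1))

eps = 0.5

# mu_n = 1 + n * 1{n odd}: every even n contributes n^{-1/2}, so the sum diverges.
odd_spikes = partial_sum(lambda n: 1 + n * (n % 2), eps, 100_000)

# mu_n = 1 + n * 1{n not a power of 2}: only powers of 2 contribute n^{-1/2},
# a geometric-type sum; all other terms are n^{-(1+n)/2}, which is negligible.
not_pow2 = partial_sum(lambda n: 1 + n * (n & (n - 1) != 0), eps, 100_000)
```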