We assume that $n > 0$ and $0 < \epsilon < 1$. I would like to get some feedback on my proof that $\log_{2}n = O(n^{1-\epsilon})$.
When we want to show that $f(n) = O(g(n))$, we need to find some positive constants $c, n_{0}$ such that for every $n \geq n_{0}$ the following holds: $f(n) \leq cg(n)$. Here $f(n) = \log_{2}(n)$ and $g(n) = n^{1-\epsilon}$, so we want to find some positive constants $c,n_{0}$ such that $\log_{2}n \leq cn^{1-\epsilon}$ for every $n\geq n_{0}$.
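As a quick sanity check of the definition (not part of the proof), here is a numerical verification for one concrete choice: fixing $\epsilon = 1/2$, the constants $c = 1$, $n_{0} = 16$ appear to work. These particular constants are just one valid choice for this one value of $\epsilon$.

```python
import math

# Concrete illustration of the big-O definition with epsilon = 1/2:
# claim that log2(n) <= c * n^(1/2) for c = 1 and every n >= n0 = 16.
c, n0 = 1, 16

def ok(n):
    return math.log2(n) <= c * math.sqrt(n)

# spot-check the claim over a range of n >= n0
for n in range(n0, 100_000):
    assert ok(n), f"inequality fails at n = {n}"
```

Note that equality holds exactly at $n = 16$: $\log_{2}16 = 4 = \sqrt{16}$.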
For $\log_{2}n \leq cn^{1-\epsilon}$ to hold, $\log_{2}(\log_{2}n) \leq \log_{2}(cn^{1-\epsilon})$ must also hold, since taking $\log_{2}$ of both sides preserves the inequality. Doing the math from there we have:
$\log_{2}(\log_{2}n) \leq \log_{2}(c) + \log_{2}(n^{1-\epsilon}) = \log_{2}(c) + (1-\epsilon)\log_{2}(n)$
because $\log_{2}(n) \geq 1$ for $n \geq 2$, we have
$\log_{2}(\log_{2}n) \leq (\log_{2}(c) + (1-\epsilon))\log_{2}(n)$
now at this point, someone could suggest that all we need to do is find a constant $c$ such that $(\log_{2}(c) + (1-\epsilon)) \geq 1$. This happens when $c = 2^{\epsilon}$.
However, shouldn't we also prove that $\log_{2}(\log_{2}n) \leq \log_{2}(n)$ for every $n \geq n_{0}$? I have seen many proofs where they just pick an $n_{0}$ without formally going over the argument. My claim is that we can pick $n_{0} = 4$. So now we need to prove that $\log_{2}(\log_{2}n) \leq \log_{2}(n)$ for every $n \geq 4$; together with the constant picked above, this will complete the proof of the original statement.
Let $h(n) = \log_{2}(\log_{2}n) - \log_{2}(n)$. We have that $h'(n) = \frac{1-\ln(n)}{n\ln(2)\ln(n)}$. This derivative is zero when $n = e$ and negative for every $n > e$, so $h$ is decreasing on $(e, \infty)$. Since $4 > e$ and $h(4) = -1 < 0$, this means that $h(n) \leq -1 < 0$ for every $n \geq 4$. So $\log_{2}(\log_{2}n) \leq \log_{2}(n)$ for every $n \geq 4$, which means that we can pick $n_{0} = 4$.
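As a numerical sanity check of this last claim (it does not replace the derivative argument, just corroborates it), one can evaluate $h$ at a few points $n \geq 4$:

```python
import math

# h(n) = log2(log2(n)) - log2(n); the claim is h(n) < 0 for all n >= 4
def h(n):
    return math.log2(math.log2(n)) - math.log2(n)

for n in [4, 10, 100, 10**6, 10**12]:
    assert h(n) < 0, f"h({n}) = {h(n)} is not negative"

print(h(4))  # -1.0 exactly, matching the value used in the proof
```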
You made one small mistake in the proof.
As we are working backwards from the definition, we have to take care that the deductions we make are valid when applied backwards. This is tricky when order relations are involved!
Concretely, you are saying that since $$\log_2(\log_2 n)\le\log_2(c)+(1-\epsilon)\log_2(n) \tag{1}$$ and $$\log_2(c) \le \log_2(c)\log_2(n) \tag{2}$$ then it must be the case that $$\log_2(\log_2 n)\le\log_2(c)+(1-\epsilon)\log_2(n)\le\log_2(c)\log_2(n)+(1-\epsilon)\log_2(n)=(\log_2 c + (1- \epsilon))\log_2 n\tag{3}$$.
This is of course literally valid, but we want (1) as a conclusion, not as a premise!
As an alternative way of proving the result, I suggest studying the asymptotic behavior of the quotient $\frac{\log_2(n)}{n^{1-\epsilon}}$.
If the limit of this quotient goes to $0$ as $n$ goes to $\infty$, then it must be the case that $\log_2(n)\in \mathcal{O}(n^{1-\epsilon})$ (can you prove this?).
And in fact this is what happens!
$$ \lim_{n\to \infty} \frac{\log_2(n)}{n^{1-\epsilon}} = \lim_{n\to \infty} \frac{\frac{1}{n\ln 2}}{(1-\epsilon)n^{-\epsilon}}=\lim_{n\to \infty} \frac{n^{\epsilon -1}}{(1-\epsilon)\ln(2)} = 0 $$
where the first equality comes from L'Hôpital's rule (differentiating $\log_2 n = \frac{\ln n}{\ln 2}$ gives $\frac{1}{n\ln 2}$), and the limit goes to zero because $\epsilon < 1$, so $n^{\epsilon - 1} \to 0$.
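You can also see this limit numerically. Here is a small check (taking $\epsilon = 0.5$ as an arbitrary example value in $(0,1)$) that the quotient shrinks toward $0$ as $n$ grows:

```python
import math

eps = 0.5  # any fixed epsilon in (0, 1) illustrates the same behavior

def quotient(n):
    return math.log2(n) / n**(1 - eps)

# evaluate the quotient at n = 10, 100, ..., 10^6
values = [quotient(10**k) for k in range(1, 7)]

# the quotient is strictly decreasing over these points and heads toward 0
assert all(a > b for a, b in zip(values, values[1:]))
assert values[-1] < 0.02

print(values)
```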