I am a bit confused with the notation of an optimization problem not being approximable within a factor of $(1+n^{\epsilon})$.
What exactly does this mean?
I am confused because if I (as a user of the algorithm) can choose $\epsilon$, then I can approximate as well as I want (for a given $n$).
Most likely it means that there is no algorithm (drawn from some class of "nicely behaved" algorithms, which may be further specified nearby) such that this algorithm always finds a solution within a factor of $(1+n^\epsilon)$ of the true optimum, for an $\epsilon$ that cannot depend on $n$.
It is not quite clear from the wording you quote whether it means that there is no $\epsilon$ with a matching algorithm, or merely that there is some $\epsilon$ for which there is no algorithm.
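To make the two readings explicit (this is my formalization, not from the quoted source, with $A$ ranging over the relevant class of "nicely behaved" algorithms and $\mathrm{OPT}(x)$ denoting the optimum on instance $x$ of size $n$), the stronger reading is

$$\forall \epsilon > 0:\ \neg\exists A\ \forall x:\ A(x) \le (1+n^{\epsilon})\cdot \mathrm{OPT}(x),$$

whereas the weaker reading is

$$\exists \epsilon > 0:\ \neg\exists A\ \forall x:\ A(x) \le (1+n^{\epsilon})\cdot \mathrm{OPT}(x).$$

The two differ only in whether the negation sits inside or outside the quantifier over $\epsilon$.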
Even if you had a "nice" algorithm for each $\epsilon$ (perhaps taking $\epsilon$ as an input), so that you can tune the precision as you like, this wouldn't necessarily mean you could approximate to a fixed precision for all $n$ with "nice" behavior. It is often the case that the complexity analysis that let you regard the algorithm as nicely behaved assumed that $\epsilon$ was constant. If you vary $\epsilon$, the algorithm's resource use may grow dramatically in ways that its ordinary big-O asymptotics for constant $\epsilon$ do not reveal.
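As a concrete illustration of that last point, here is a minimal sketch assuming a hypothetical approximation scheme whose running time is on the order of $n^{1/\epsilon}$ (a common shape for such schemes; the function name `steps` and the specific exponent are my invention, not from the quoted source). The running time is polynomial for any *fixed* $\epsilon$, but explodes as $\epsilon$ shrinks:

```python
def steps(n: int, eps: float) -> float:
    """Rough step count of a hypothetical scheme running in time n**(1/eps).

    For any fixed eps this is a polynomial in n, so the algorithm looks
    "nicely behaved" -- but the degree of that polynomial is 1/eps.
    """
    return n ** (1.0 / eps)

n = 1000
for eps in (1.0, 0.5, 0.1):
    print(f"eps={eps}: roughly {steps(n, eps):.1e} steps")
```

Shrinking $\epsilon$ from $1$ to $0.1$ here takes the step count from $10^3$ to $10^{30}$, even though each individual value of $\epsilon$ yields a polynomial-time algorithm. This is why "a nice algorithm for every $\epsilon$" does not contradict "no nice algorithm achieving the factor with $\epsilon$ depending on $n$".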