I am reading a paper and do not understand the following
"We allow the space used by a solution to grow as $1/ \epsilon $, so as $ \epsilon ↓ 0$ the space blows up..."
I do not understand the idea of space growing as $1/\epsilon$ (what does it mean?), and I also do not understand this notation: $\epsilon \downarrow 0$ (again, what does it mean?).
The last thing I do not understand is $O(1/\epsilon) = O(k)$. I know what big-Oh is, I am just not getting what it says in this case.
Paper link: http://web.stanford.edu/class/cs168/l/l2.pdf
Here is the relevant piece of the paper:

The amount of space (memory) $S(\epsilon,k,n)$ required by their algorithm is $n\cdot O(1/\epsilon)$, not $n\cdot O(1)$, as $\epsilon$ tends to $0$ through positive values. (That is what the downward arrow is about: $\epsilon$ decreasing to $0$.) Since $S$ also depends on $k$, they consider the case where $k$ goes to $\infty$ as $\epsilon$ goes to $0$, restricted to $1/\epsilon=O(k)$ and $k=O(1/\epsilon)$, i.e. $O(1/\epsilon)=O(k)$, which they consider OK (palatable), I think. Their notation is not quite standard.
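To make this concrete, here is a small numerical sketch (not the paper's actual algorithm; the constant `c` and the per-item bookkeeping are hypothetical) showing how a space bound of the form $S = n\cdot c/\epsilon$ blows up as $\epsilon \downarrow 0$, and how choosing $\epsilon = 1/k$ makes $1/\epsilon = k$, which is what $O(1/\epsilon)=O(k)$ is expressing:

```python
# Hypothetical constant hidden inside the O(.) notation, and a fixed n.
c = 1.0
n = 1000

# As eps decreases toward 0, the bound n * c / eps grows without limit:
for eps in [0.1, 0.01, 0.001, 0.0001]:
    space = n * c / eps
    print(f"eps = {eps}: space bound ~ {space:.0f}")

# Tying eps to k: with eps = 1/k we have 1/eps = k exactly,
# so n * O(1/eps) is the same bound as n * O(k).
for k in [10, 100, 1000]:
    eps = 1 / k
    print(f"k = {k}: n * (1/eps) = {n * (1 / eps):.0f} = n * k = {n * k}")
```

The point of the example is only the shape of the growth: halving $\epsilon$ doubles the space bound, which is what "space grows as $1/\epsilon$" means.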