The problem is given by:
$$ \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\|x \right\|}_{2} $$
Where $y$ and $x$ are vectors and $\|\cdot\|_2$ is the Euclidean norm. In the paper *Convex Sparse Matrix Factorizations*, the authors say the closed-form solution is $x=\max\{y-\lambda \frac{y}{\|y\|_2}, 0\}$. I don't know why $x$ needs to be non-negative. I think it may come from $\|x\|_2=\sqrt{x^T x}$, but I cannot derive it. Please help.
The statement appears in line 2 of the last paragraph on page 5 of the paper.
That's not what the referenced paper says. It gives an expression that is equivalent to the proximal operator of the $\ell_2$ norm:
$$ \DeclareMathOperator*{\argmin}{arg\,min} \argmin_x \frac{1}{2}\|x-y\|^2 + \lambda\|x\| = \max(\|y\|-\lambda,0)\frac{y}{\|y\|} $$ Note the vector $y$ is not inside the maximum.
I'll sketch a proof. We can decompose $x$ as the sum of two components, one parallel to $y$ and one orthogonal to it. That is, let $ x = t \frac{y}{ \| y\| } + z $ where $y^T z=0$. Then the objective reduces to:
$$\frac{1}{2}\|x-y\|^2 + \lambda\|x\| = \frac{1}{2}\|z\|^2 + \frac{1}{2}(t-\|y\|)^2 + \lambda \sqrt{t^2 + \|z\|^2}$$ Every term is nondecreasing in $\|z\|$, so the expression is minimized when $z=0$, and the problem reduces to a 1-dimensional problem: $$ \min_t \frac{1}{2}(t-\|y\|)^2 + \lambda |t| $$ Then it's a basic exercise in calculus to show that the objective is minimized when $t=\max(\|y\|-\lambda,0)$: a negative $t$ is never optimal (replacing $t$ by $-t$ leaves $\lambda|t|$ unchanged and, since $\|y\|\ge 0$, does not increase the quadratic term), and for $t>0$ setting the derivative $t-\|y\|+\lambda$ to zero gives $t=\|y\|-\lambda$ whenever $\|y\|>\lambda$, and $t=0$ otherwise.
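As a numerical sanity check (a sketch; `prox_l2` is a name chosen here, not one from the paper): for $x \ne 0$ the objective is differentiable, so the closed-form minimizer should satisfy the first-order condition $(x - y) + \lambda \frac{x}{\|x\|} = 0$.

```python
import numpy as np

def prox_l2(y, lam):
    """Proximal operator of lam * ||.||_2 (block soft-thresholding).
    Returns max(||y|| - lam, 0) * y / ||y||."""
    ny = np.linalg.norm(y)
    if ny <= lam:
        return np.zeros_like(y)
    return (1.0 - lam / ny) * y

y = np.array([3.0, -4.0])   # ||y|| = 5
lam = 1.0
x = prox_l2(y, lam)         # = (1 - 1/5) * y = [2.4, -3.2]

# First-order optimality condition of (1/2)||x - y||^2 + lam * ||x|| at x != 0:
grad = (x - y) + lam * x / np.linalg.norm(x)
print(np.allclose(grad, 0.0))  # → True
```

When $\|y\| \le \lambda$ the condition above no longer applies; instead $0$ belongs to the subdifferential at $x = 0$, which is exactly what the `ny <= lam` branch returns.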