I'm trying to make sense of a simple equation from the paper SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution by Bingzhe Wu, Haodong Duan, Zhichao Liu, and Guangyu Sun.

What I don't understand is $\rho(x)=\sqrt{x^2 + \epsilon^2}$. How am I supposed to find $\epsilon$ (an error term, I guess?) if $\rho$ only takes $x$ as input? Thanks
Here, $\epsilon$ is not computed from $x$; it is a hyperparameter that is tuned. The value used is given in the "Training Details" section (4.1) of the paper.
The Charbonnier penalty is a differentiable variant of the L1 norm. The parameter $\epsilon$ determines how closely the penalty resembles the L1 penalty (while remaining differentiable everywhere, including at $x = 0$).
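To make this concrete, here is a minimal sketch in NumPy comparing the Charbonnier penalty to $|x|$. The value `eps=1e-3` is just an illustrative choice, not necessarily the one used in the paper:

```python
import numpy as np

def charbonnier(x, eps=1e-3):
    """Charbonnier penalty rho(x) = sqrt(x^2 + eps^2).

    eps=1e-3 is an illustrative default, not the paper's value.
    """
    return np.sqrt(x ** 2 + eps ** 2)

x = np.linspace(-1.0, 1.0, 5)

# For small eps the penalty is nearly identical to |x| ...
print(charbonnier(x))   # close to [1, 0.5, 0.001, 0.5, 1]
print(np.abs(x))        # [1, 0.5, 0, 0.5, 1]

# ... but unlike |x| it is smooth at 0: the derivative
# x / sqrt(x^2 + eps^2) exists everywhere and is 0 at x = 0.
grad = x / np.sqrt(x ** 2 + eps ** 2) if (eps := 1e-3) else None
print(grad)
```

As $\epsilon \to 0$ the penalty converges to $|x|$; a larger $\epsilon$ rounds off the kink at the origin more, which is why it's treated as a tunable knob rather than something derived from $x$.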