Trying to understand a simple function


I'm trying to make sense of a simple equation from the paper *SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution* by Bingzhe Wu, Haodong Duan, Zhichao Liu, and Guangyu Sun. [Formula from the paper]

What I don't understand is $\rho(x)=\sqrt{x^2 + \epsilon^2}$. How am I supposed to find $\epsilon$ (the error term, I guess?) if $\rho$ only takes $x$ as input? Thanks


Best answer:

Here, $\epsilon$ is a hyperparameter that is tuned, not something computed from $x$. In the "Training Details" section (4.1), we see that

We train our model from scratch with ADAM optimizer by setting $\beta_1 = 0.9, \beta_2 = 0.99,$ and $\epsilon = 10^{-8}$.

The Charbonnier penalty is a differentiable variant of the L1 norm. For $|x| \gg \epsilon$ we have $\rho(x) \approx |x|$, while at the origin $\rho(0) = \epsilon$ and the derivative $\rho'(x) = x/\sqrt{x^2 + \epsilon^2}$ is defined everywhere. So the smaller $\epsilon$ is, the more closely the penalty resembles the L1 penalty, while remaining differentiable at $x = 0$.
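You can see this behavior numerically. Below is a minimal sketch using NumPy (the function name and test values are my own, not from the paper):

```python
import numpy as np

def charbonnier(x, eps=1e-8):
    """Charbonnier penalty: a smooth, differentiable surrogate for |x|."""
    return np.sqrt(x**2 + eps**2)

def charbonnier_grad(x, eps=1e-8):
    """Derivative of the penalty; defined even at x = 0, unlike d|x|/dx."""
    return x / np.sqrt(x**2 + eps**2)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(charbonnier(x))       # approximately |x|, except rho(0) = eps
print(charbonnier_grad(x))  # approximately sign(x), but 0.0 at x = 0
```

With `eps=1e-8`, the penalty is indistinguishable from the L1 norm for typical pixel-scale residuals, yet its gradient is well defined at zero, which is what makes it convenient as a training loss.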