While reading the paper "Recurrent Experience Replay in Distributed Reinforcement Learning", I came across the function $h$, which the authors use to re-scale the Q-estimates the network receives.
The authors say $h$ is an invertible function, but I can't even get past the first step of inverting it.
$$h(x) = \operatorname{sign}(x)(\sqrt{|x|+1} -1) + \epsilon x$$ where $\epsilon$ is a very small number (for example $0.0001$) and $x$ is any real number.
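For concreteness, here is a direct Python transcription of $h$ (the constant name `EPS` is my own):

```python
import math

EPS = 1e-4  # the "very small number" epsilon

def h(x: float) -> float:
    """h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x."""
    return math.copysign(math.sqrt(abs(x) + 1.0) - 1.0, x) + EPS * x
```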
The first step in finding $h^{-1}$ is to swap the function with its argument:
$$x =\operatorname{sign}(h)(\sqrt{|h|+1} -1) + \epsilon h$$
But then, how do I re-arrange this to solve for $x$ again?
paper name: Recurrent Experience Replay in Distributed Reinforcement Learning, page 3
Edit 1 (edit removed)
Edit 2: As @dcolazin's answer shows, re-arrange the original equation into:
$$\epsilon^2 X^2 -(2\epsilon(H+1)+1)X + (H+1)^2-1=0$$
Now that it is in the form $at^2 + bt + c = 0$, we can apply the quadratic formula to obtain two candidate roots:
$$t_{1,2} = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$$
Substituting our coefficients, we get
$$X_{1,2} =\frac{2\epsilon H + 2\epsilon + 1 \pm \sqrt{4\epsilon^2 + 4\epsilon H + 4\epsilon + 1}}{2\epsilon^2}$$
applying $sign_x = sign_h$ (which holds because $h$ is strictly increasing and $h(0)=0$) and $x = sign_x X$, we get:
$$x = \frac{sign_x \cdot \bigg( 2\epsilon |h| + 2\epsilon + 1 \pm \sqrt{4\epsilon(\epsilon + |h| + 1) + 1}\bigg) }{2\epsilon^2}$$
in other words,
$$h^{-1}_x = \frac{sign_h \cdot \bigg( 2\epsilon |h_x| + 2\epsilon + 1 \pm 2 \sqrt{\epsilon(\epsilon + |h_x| + 1) + 0.25}\bigg) }{2\epsilon^2}$$
simplifying further:
$$h^{-1}_x = \frac{sign_h \cdot \bigg( \epsilon |h_x| + \epsilon + 0.5 \pm \sqrt{\epsilon(\epsilon + |h_x| + 1) + 0.25}\bigg) }{\epsilon^2}$$
notice the $\pm$: only one of the two roots can be the true inverse.
Edit 3: Somehow, my function does not appear when plotted, and @dcolazin's function plots as a straight line.
Instead, I would expect both to curve upwards, cancelling out the compression applied by $h$.
Edit 4 (conclusion)
As @dcolazin pointed out in a comment below, the inverse function should be written with an ordinary argument of its own. So, renaming things (and keeping only the minus root, since the inverse must satisfy $h^{-1}(0)=0$), we get:
$$h^{-1} = \frac{sign_x \cdot \bigg( \epsilon |x| + \epsilon + 0.5 - \sqrt{\epsilon(\epsilon + |x| + 1) + 0.25}\bigg) }{\epsilon^2}$$
This is the final inverse function, which takes $\epsilon$ into account. However, as @dcolazin has shown, if $\epsilon^2$ is negligible you should use his approximation instead, since it is cheaper to evaluate and avoids dividing by the tiny quantity $\epsilon^2$:
$$h^{-1} \approx sign_x \cdot \frac{(|x|+1)^2-1}{2\epsilon(|x|+1)+1}$$
Where $sign_x$ is of course $\frac{x}{|x|}$, which is either $1$ or $-1$ (for $x = 0$ the numerator vanishes, so the inverse is $0$ there regardless).
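A quick numerical sanity check of this result (a Python sketch; the function names and test points are my own, with `EPS` set to the paper's suggested $0.0001$) confirms that the minus branch really undoes $h$:

```python
import math

EPS = 1e-4

def h(x):
    """Forward re-scaling: h(x) = sign(x)(sqrt(|x| + 1) - 1) + eps * x."""
    return math.copysign(math.sqrt(abs(x) + 1.0) - 1.0, x) + EPS * x

def h_inv(y):
    """Exact inverse: the minus branch of the quadratic in X = |x|."""
    a = abs(y)
    X = (EPS * a + EPS + 0.5
         - math.sqrt(EPS * (EPS + a + 1.0) + 0.25)) / EPS**2
    return math.copysign(X, y)

for x in [-1e4, -3.5, 0.0, 0.1, 7.0, 1e4]:
    assert abs(h_inv(h(x)) - x) <= 1e-5 * max(1.0, abs(x))
```

Note that the plus branch would fail this check: at $y = 0$ it returns $\frac{2\epsilon + 1}{\epsilon^2}$ rather than $0$.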

Let me call $s_x, s_h$ the signs of $x$ and $h$ respectively.
$$h = s_x(\sqrt{|x| +1 } -1) + \epsilon x$$
Then, since $h$ is strictly increasing with $h(0)=0$, $$s_x = s_h \Rightarrow s_h h = s_x (s_x(\sqrt{|x| +1 } -1) + \epsilon x) \Rightarrow \\ |h| = \sqrt{|x| +1 } -1 + \epsilon |x| \Rightarrow |h|+1 = \sqrt{|x| +1 } + \epsilon |x|$$
$$|h|+1-\epsilon |x| = \sqrt{|x|+1} \Rightarrow (|h|+1-\epsilon |x|)^2 = |x|+1 \Rightarrow$$
$$(|h|+1)^2 + \epsilon^2 x^2 - 2\epsilon(|h|+1)|x| = |x|+1 \Rightarrow$$
Call $X=|x|, H = |h|$:
$$\epsilon^2 X^2 -(2\epsilon(H+1)+1)X + (H+1)^2-1=0$$
which is a second-degree equation in $X$ that you can solve. Remember that $s_x = s_h$, so you can recover $x = s_x X$.
But if $\epsilon^2$ is negligible, then $X \approx \frac{(H+1)^2-1}{2\epsilon(H+1)+1}$.
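As a rough check of this approximation (a Python sketch; the test values and tolerance are my own choices), the approximate root stays very close to the exact minus-branch root for $\epsilon = 10^{-4}$:

```python
import math

eps = 1e-4

def exact_X(H):
    # Minus branch of eps^2 X^2 - (2 eps (H+1) + 1) X + (H+1)^2 - 1 = 0.
    b = 2.0 * eps * (H + 1.0) + 1.0
    c = (H + 1.0) ** 2 - 1.0
    return (b - math.sqrt(b * b - 4.0 * eps**2 * c)) / (2.0 * eps**2)

def approx_X(H):
    # Drop the eps^2 X^2 term: X ~ ((H+1)^2 - 1) / (2 eps (H+1) + 1).
    return ((H + 1.0) ** 2 - 1.0) / (2.0 * eps * (H + 1.0) + 1.0)

for H in [0.0, 0.5, 2.0, 10.0]:
    assert abs(exact_X(H) - approx_X(H)) <= 1e-3 * (1.0 + exact_X(H))
```

The dropped $\epsilon^2 X^2$ term only matters when $X$ is on the order of $1/\epsilon$, so for moderate values of $H$ the two agree closely.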