What is the difference between $sign(x)$ and $sgn(x)$ in mathematics? I am confused because I know they differ in how they handle zero, but I have found several publications in computer science that use the signum function and the sign function interchangeably. I would like to know what mathematics says about their differences.
The definition of the $\mathrm{signum}$ function is:
$\mathrm{sgn}(x) = \begin{cases} -1 & \text{if } x < 0 \\ 0 & \text{if } x = 0 \\ 1 & \text{if } x > 0 \end{cases}$
whereas the $\mathrm{sign}$ function is:
$ \mathrm{sign}(x) = \begin{cases} -1 & \text{if } x < 0 \\ 1 & \text{if } x \geq 0 \end{cases} $
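To make the comparison concrete, here is a minimal Python sketch of both functions exactly as defined above (the function names `sgn` and `sign` are just illustrative labels, not standard library calls):

```python
def sgn(x):
    """Signum as defined above: -1 for x < 0, 0 for x == 0, 1 for x > 0."""
    if x < 0:
        return -1
    elif x == 0:
        return 0
    else:
        return 1

def sign(x):
    """Sign as defined above: -1 for x < 0, 1 for x >= 0 (including zero)."""
    return -1 if x < 0 else 1

# The two functions agree for every nonzero input and differ only at zero:
print(sgn(-3), sign(-3))  # -1 -1
print(sgn(7), sign(7))    # 1 1
print(sgn(0), sign(0))    # 0 1
```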
I do know that they are very similar, and zero occurs rarely enough in most applications that they would be interchangeable in practice. However, I want solid grounding to support my argument in my own paper. How rare would the input $x = 0$ have to be for it to be acceptable to use them interchangeably in the context of mathematics? And can we use them interchangeably, in the context of mathematics, if $x$ is an integer ranging from $-255$ to $255$?
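For the integer range mentioned above, the disagreement set can simply be enumerated. A quick sketch (again using illustrative `sgn`/`sign` implementations matching the definitions in the question) confirms that over $-255 \le x \le 255$ the two functions differ at exactly one of the 511 inputs, namely $x = 0$:

```python
def sgn(x):
    # Equivalent branch-free form of the signum definition
    return (x > 0) - (x < 0)

def sign(x):
    return -1 if x < 0 else 1

# Enumerate every integer in the range and collect the disagreements
disagreements = [x for x in range(-255, 256) if sgn(x) != sign(x)]
print(disagreements)  # [0]
```

So whether the two are interchangeable on this range reduces to whether the input $0$ can occur at all, and if so, whether the downstream use cares about the value at $0$.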