I am trying to wrap my head around errors in floating point calculations. Let me denote absolute error as follows: $e = |x - \hat{x}|$, where $x$ is the exact number and $\hat{x}$ is its floating point representation. Assume round-to-nearest.
Now, the first thing I would like to understand is this inequality: the absolute error doesn't exceed half the machine epsilon times the absolute value of the number. Is it actually correct?
$$|x - \hat{x}| \leq \frac{\epsilon_1}{2}|\hat{x}|$$
Here $\epsilon_1$ denotes machine epsilon and, for an IEEE 754 single-precision float, is equal to $2^{-23}$. I understand the inequality when it looks like $|x - \hat{x}| \leq \frac{\epsilon_1}{2}$, but where does the absolute value on the right come from?
Machine $\epsilon$ represents the relative, not absolute, error. You have a certain number $n$ of bits in the mantissa, so with round-to-nearest the relative error is at most about $2^{-n}$. If your number is scaled by $2^{100}$, the relative error stays the same, but the absolute error is multiplied by that same $2^{100}$. That is why you have the $|\hat x|$ on the right: the bound must scale with the magnitude of the number. The modulus is just in case $\hat x$ is negative.
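You can verify this numerically. The sketch below (my own illustration, using Python doubles rather than single-precision floats, so $\epsilon = 2^{-52}$ here) checks the bound $|x - \hat{x}| \leq \frac{\epsilon}{2}|\hat{x}|$ with exact rational arithmetic, at wildly different magnitudes:

```python
import sys
from fractions import Fraction

# Machine epsilon for IEEE 754 double precision: 2**-52.
eps = Fraction(sys.float_info.epsilon)

def bound_holds(x_exact: Fraction) -> bool:
    """Round x_exact to the nearest double and verify
    |x - xhat| <= (eps/2) * |xhat| exactly, via rationals."""
    xhat = float(x_exact)                    # round-to-nearest
    abs_err = abs(Fraction(xhat) - x_exact)  # exact absolute error
    return abs_err <= (eps / 2) * abs(Fraction(xhat))

# 1/10 is not representable in binary, so rounding error is nonzero.
# Scaling by an exact power of two changes the absolute error but
# leaves the relative error untouched, so the bound keeps holding:
for scale in (1, 2**100, -(2**100)):
    assert bound_holds(Fraction(1, 10) * scale)
```

Note that the bound assumes normalized numbers; it breaks down for subnormals, where the exponent can no longer shrink to match the mantissa.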