My textbook states the following:
If $x\in \mathbb R$ such that $x_{\text{min}}\leq |x| \leq x_{\text{max}}$, then $$fl(x) = x(1+\delta) \text{ with } |\delta | \leq u$$ where $$u = \frac12 \beta^{1-t} = \frac12 \epsilon_M$$ is the so-called roundoff unit (or machine precision).
I do not understand why $fl(x)$ is defined the way it is. Where does $(1+\delta)$ come from? Why is $u$ defined the way it is?
I only understand that $fl$ is the map $\mathbb R \to \mathbb F$, where $\mathbb F$ is the set of floating point numbers: $fl$ is supposed to round $x$ to a nearby element of $\mathbb F$, either by chopping the number $x$ or by rounding it based on the value of the next digit (or a combination of the two).
Due to the binary nature of floating point numbers, all errors are proportional to the errors in $[1,2]$: if $2^d\le x\le 2^{d+1}$, then $fl(x)=2^d\cdot fl(2^{-d}x)$, and thus $$\delta=\frac{x-fl(x)}{x}=\frac{2^{-d}x-fl(2^{-d}x)}{2^{-d}x}.$$
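A quick numerical check of this scaling identity, a sketch that takes single precision (IEEE 754 binary32, $t=24$) as $\mathbb F$ and Python doubles as stand-ins for the reals:

```python
import struct

def fl(x):
    """Round a double to the nearest single-precision float (t = 24 bits)."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 3.141592653589793   # 2^1 <= x <= 2^2, so d = 1
d = 1
# Multiplying by an exact power of two only shifts the exponent,
# so rounding commutes with the scaling: fl(x) = 2^d * fl(2^-d * x)
print(fl(x) == 2.0**d * fl(2.0**-d * x))  # True
```

The same holds for any $d$ that causes no overflow or underflow, since the significand bits are untouched by the scaling.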
Inside that interval $[1,2]$ there are only finitely many floating point numbers, spaced $\beta^{1-t}=2^{1-t}$ apart, so the distance to the nearest floating point number is a continuous (piecewise linear) function of $x$. Thus the relative error $|\delta(x)|=\left|\frac{x-fl(x)}{x}\right|$ attains a maximum $\mu$ on the interval $[1,2]$. What that maximum is depends on the rounding mode: for rounding to nearest, $|x-fl(x)|\le\frac12 2^{1-t}$, and since $x\ge 1$ this gives $|\delta|\le\frac12 2^{1-t}=u$, which is exactly your textbook's roundoff unit with $\beta=2$; for chopping the bound doubles to $2^{1-t}=\epsilon_M$.
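The bound $|\delta|\le u$ can be checked empirically; a minimal sketch, again using single precision as $\mathbb F$ (so $u=\frac12 2^{1-24}=2^{-24}$) and Python doubles standing in for $\mathbb R$:

```python
import random
import struct

def fl(x):
    """Round a double to the nearest single-precision float (t = 24 bits)."""
    return struct.unpack('f', struct.pack('f', x))[0]

u = 0.5 * 2.0**(1 - 24)   # roundoff unit for beta = 2, t = 24

random.seed(0)
worst = 0.0
for _ in range(100_000):
    x = random.uniform(1.0, 2.0)
    delta = abs((x - fl(x)) / x)
    worst = max(worst, delta)

print(worst <= u)  # True: every observed relative error is within u
```

Sampling only $[1,2]$ suffices because, by the scaling argument above, the relative error for any normalized $x$ equals that of $2^{-d}x\in[1,2]$.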