I have a question about machine epsilon.
Let $\text{eps}$ be the relative machine error, so that $\text{eps}=\frac{b^{1-m}}{2}$ with base $b > 1$ and mantissa length $m$.
I have to prove that $$ \text{eps} = \inf\{δ > 0 : fl(1 + δ) > 1\}.$$
Here $\text{fl}(x)$ is the rounding function.
If I understand correctly, $\text{eps}$ is in this case the smallest positive $\delta$ such that $\text{fl}(1+\delta) > 1$.
So what I have done is to write a short algorithm in Python:
eps = 1.0
while 1.0 + eps > 1.0:
    eps = eps / 2.0
eps = eps * 2.0
Is that correct, or is there a more elegant method of proof?
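For reference, here is the loop as runnable Python, assuming IEEE 754 double precision ($b = 2$, $m = 53$). Note that it terminates with $2^{-52} = b^{1-m}$, which is twice the $\text{eps} = b^{1-m}/2$ defined above:

```python
# The halving loop from the question, assuming IEEE 754 doubles (b = 2, m = 53).
eps = 1.0
while 1.0 + eps > 1.0:  # keep halving until 1 + eps rounds back to 1
    eps = eps / 2.0
eps = eps * 2.0         # undo the last halving
print(eps)              # 2.220446049250313e-16 == 2**-52
```

So the loop does not find $b^{1-m}/2$ itself, but the spacing of floating point numbers just above $1$.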
Let $\mathcal{F}$ be the set of floating point numbers. The machine epsilon at a floating point number $x$ is defined as $$\text{eps}(x) = \text{next}(x) - x,$$ where $\text{next}(x)$ is the next floating point number above $x$: $$\text{next}(x) = \inf\{y \in \mathcal{F}: y > x\}.$$
The machine epsilon $\varepsilon$ is defined as $\varepsilon = \text{eps}(1)$.
Let $0 < \delta < \varepsilon$. According to the definition of $\varepsilon$, there is no floating point number in the open interval $(1, 1+\varepsilon)$. Hence $\text{fl}(1+\delta) \in \{1, 1+\varepsilon\}$.
For $\delta = \varepsilon/2$, $\text{fl}(1+\delta) = 1$ according to the rounding rule, i.e. round to nearest, ties to even: the distance from $1+\varepsilon/2$ to $1$ and to $1+\varepsilon$ is the same, but the tie rule prefers $1$, whose last significand bit is even. For any $0 < \delta < \varepsilon/2$, $\text{fl}(1+\delta) = 1$, since the distance from $1+\delta$ to $1$ is smaller than the distance to $1+\varepsilon$. For $\varepsilon/2 < \delta < \varepsilon$, $\text{fl}(1+\delta) = 1+\varepsilon > 1$, since the distance from $1+\delta$ to $1$ is greater than the distance to $1+\varepsilon$.
Therefore $$\inf\{\delta > 0: \text{fl}(1+\delta) > 1\} = \inf\{\delta > 0: \delta > \varepsilon/2\} = \frac{\varepsilon}{2}.$$
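A minimal numerical check of this infimum, assuming IEEE 754 doubles with round to nearest, ties to even (so $\varepsilon = 2^{-52}$), using `math.ulp` and `math.nextafter` (Python 3.9+):

```python
import math

eps = math.ulp(1.0)                   # epsilon = next(1) - 1 = 2**-52 for doubles
print(1.0 + eps / 2 == 1.0)           # True: the tie eps/2 rounds to even, i.e. to 1
delta = math.nextafter(eps / 2, 1.0)  # smallest double strictly above eps/2
print(1.0 + delta == 1.0 + eps)       # True: just past the midpoint, rounds up
```

Any $\delta$ strictly greater than $\varepsilon/2$ already satisfies $\text{fl}(1+\delta) > 1$, while $\delta = \varepsilon/2$ itself does not, which is exactly the statement that the infimum is $\varepsilon/2$ and is not attained.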
It is not clear how you define $\text{eps}$. If by your definition $\text{eps} = \varepsilon/2$, then your claim is true; otherwise it is false.