I am studying my notes, and there is something I don't understand about $\epsilon_m$. We represent floating point numbers as $1.d_1d_2\ldots d_t \times \beta^e$. Now, my professor defines $\epsilon_m$ as the smallest $x$ such that $fl(1+x) > 1$. But then she writes
$$\epsilon_m = \beta^{1-t} \hspace{1cm} \text{(for chopping)}$$ and
$$\epsilon_m = \dfrac 12 \beta^{1-t} \hspace{1cm} \text{(for rounding).}$$ However, I think these should be $\beta^{-t}$ and $\dfrac 12 \beta^{-t}$, respectively.
The number $1$ and the next representable number after it are
$$1.00 \cdots 00 \times \beta^0 \quad\text{and}\quad 1.00 \cdots 01 \times \beta^0,$$ where the final $1$ is in the $t$-th digit after the point. Therefore their difference is $\beta^{-t}$, so if $x = \beta^{-t}$, then $1+x$ is itself a floating point number, hence $fl(1+x) = 1+x > 1$.
Am I wrong, or are the notes wrong?
Thank you very much.
Unfortunately, you didn't state what the variables mean, but I assume $\beta$ is the base of the floating point system (typically 2), and that your professor uses $t$ to denote the number of precision bits in the bit-string representation of the floating point format. The precision bits are defined to include the sign bit [in fact one other bit, see below] besides the fraction/significand bits, and that extra bit is subtracted away by the $-1$ in the exponent of the machine epsilon: $\beta^{-(t-1)} = \beta^{1-t}$. Wikipedia gives you the same formula.
EDIT: as conditionalMethod pointed out, $t$ is the number of bits including the implicit leading 1-bit before the radix point, which is not stored in the floating point format (this bit still counts as a precision bit, since it contributes to the represented value). The $-1$ in the exponent is not due to the sign bit.
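To check this numerically, here is a small Python sketch assuming IEEE 754 binary64 ($\beta = 2$, $t = 53$ counting the implicit leading bit), for which $\beta^{1-t} = 2^{-52}$:

```python
import sys

# For binary64: beta = 2, t = 53 (52 stored fraction bits plus the
# implicit leading 1-bit), so machine epsilon is 2**(1 - 53) = 2**-52.
eps = 2.0 ** -52
assert eps == sys.float_info.epsilon

# 1 + eps is the next representable number after 1, so fl(1 + eps) > 1.
assert 1.0 + eps > 1.0

# Under round-to-nearest (ties to even), 1 + eps/2 rounds back down
# to 1, which is where the rounding variant (1/2) * beta**(1-t)
# comes from.
assert 1.0 + eps / 2 == 1.0
```

Note that with ties-to-even rounding, $x = \frac{1}{2}\beta^{1-t}$ itself rounds back to $1$; the strict inequality $fl(1+x) > 1$ first holds just above that boundary.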
I'd like to add that the leading 1-bit is implicit only if the floating point number is "normalized". If the biased exponent field has its minimum value (all bits zero), the number is said to be "denormalized". Denormalized floating point numbers store their leading bits explicitly in the significand field behind the radix point (so they effectively start with an implicit 0-bit), in order to represent even smaller numbers than would be possible with an implicit leading 1-bit. However, the $t$ used for the machine epsilon seems to me to be defined for normalized numbers. When we covered floating point numbers in my lecture (which was not held in English), the machine epsilon was never mentioned, but judging by the formula, it apparently applies specifically to normalized numbers.
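To illustrate the normalized/denormalized distinction, another short Python sketch (again assuming binary64): the smallest positive normalized double is $2^{-1022}$, while denormalized numbers extend further down to $2^{-1074}$ by trading the implicit leading 1 for leading zeros in the significand:

```python
import sys

# Smallest positive *normalized* double: minimum normalized exponent,
# significand 1.000...0, i.e. 2**-1022.
min_normal = sys.float_info.min
assert min_normal == 2.0 ** -1022

# Below that come the denormalized (subnormal) numbers: the implicit
# leading bit becomes 0 and precision is gradually lost. The smallest
# positive one is 2**-1074.
min_subnormal = 2.0 ** -1074
assert 0.0 < min_subnormal < min_normal

# Halving it again underflows to zero (ties-to-even rounds the
# halfway value down to 0.0).
assert min_subnormal / 2 == 0.0
```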