What is the convention for rounding operations in floating-point arithmetic?
Suppose we are using 64-bit (double) precision and want to add the numbers $1$ and $2^{-53}$ in floating-point arithmetic. That is, we want to perform the following operation:
$1 + 2^{-53}$
We consider their representation:
$$1 \ \ \ \ \ = 2^0 \ \ \ \ \times 1. \underbrace{0000 \dots 0000}_\text{52 digits} $$
$$2^{-53}=2^{-53} \times 1.\underbrace{0000 \dots 0000}_\text{52 digits}$$
The significand of the smaller operand is then shifted so that the exponents match, and we add:
$$ 2^0 \times 1. \underbrace{0000 \dots 0000}_\text{52 digits} $$
$$ + \ \ \ 2^{0} \times 0.\underbrace{0000 \dots 0000}_\text{52 digits}1 \ \ \ \ $$
Now the exact result is $$2^{0} \times 1.\underbrace{0000 \dots 00001}_\text{53 digits}$$ which needs 53 significand digits, so a rounding must be performed. Both rounding up and rounding down produce the same rounding error, so what is actually done? I have tried this in Matlab and it rounds down to $2^{0} \times 1.\underbrace{0000 \dots 0000}_\text{52 digits}$.
How can I know how the machine rounds the number in any situation?
I assume you mean an IEEE 754 format (otherwise you are really lost without the detailed spec of your particular system). Two quotes from IEEE Std 754-2008:
"Rounding takes a number regarded as infinitely precise and, if necessary, modifies it to fit in the destination’s format while signaling the inexact exception, underflow, or overflow when appropriate."
"An implementation of this standard shall provide roundTiesToEven and the three directed rounding attributes."
The default rounding-direction attribute is roundTiesToEven: when the exact result lies exactly halfway between two representable numbers, the one whose last significand bit is $0$ is chosen. Your sum $1 + 2^{-53}$ is precisely such a tie, and the even candidate is $1.\underbrace{0000 \dots 0000}_\text{52 digits}$, which is why Matlab returns it. To know the rounding environment in any situation, query the rounding mode (and, on some platforms, the precision) from the floating-point unit; normally there are library functions for this.