It is known that if $a$ and $b$ have exact representations in a given $t$-bit arithmetic, then $fl(a+b) = (a+b)(1+\delta)$, where $fl(\cdot)$ denotes the result of an operation performed by the computer, $|\delta| \leqslant u$, and $u = 2^{-t-1}$ is the unit round-off.
Assume $0 < a < b < c < d$ have exact representations. Which of the two algorithms, $fl(((a+b)+c)+d)$ or $fl(((d+c)+b)+a)$, computes the more accurate result (i.e., with smaller absolute error)?
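(Numerically, the ascending order does seem to win. Here is a quick toy experiment I ran, simulating $t$-bit rounding with Python's `math.frexp`/`math.ldexp`; the particular values and $t = 12$ are my own choices, not from the book:)

```python
import math

def fl(x, t=12):
    """Round x to t significant bits -- a toy model of t-bit arithmetic."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**t) / 2**t, e)

def sum_fl(xs, t=12):
    """Left-to-right summation, rounding every partial sum to t bits."""
    s = 0.0
    for x in xs:
        s = fl(s + x, t)
    return s

# Values chosen for illustration (all exactly representable in 12 bits), a < b < c < d.
a, b, c, d = 1.0, 2.0, 3.0, 4096.0
exact = a + b + c + d                         # 4102, also exactly representable

asc  = sum_fl([a, b, c, d])                   # fl(((a+b)+c)+d)
desc = sum_fl([d, c, b, a])                   # fl(((d+c)+b)+a)
print(abs(asc - exact), abs(desc - exact))    # ascending order has the smaller error
```

Adding the small numbers first keeps the partial sums small, so less information is lost when they finally meet the large term $d$.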
I understood the solution up to the expansion of $fl(a+b+c+d)$ into $$a+b+c+d + (a+b)(\delta_1 + \delta_1\delta_2 + \delta_1\delta_3 + \delta_1\delta_2\delta_3) + (a+b+c)(\delta_2 + \delta_2\delta_3) + (a+b+c+d)\delta_3.$$
In the next step, it goes $$fl(a+b+c+d) \\= a+b+c+d + (a+b)(\delta_1 + \delta_1\delta_2 + \delta_1\delta_3 + \delta_1\delta_2\delta_3) + (a+b+c)(\delta_2 + \delta_2\delta_3) + (a+b+c+d)\delta_3 \\= a+b+c+d + (3a+3b+3c+3d)\delta.$$
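Here is what I get if I try to fill in the gap myself (I am not sure this is what the book intends). Dropping the products $\delta_i\delta_j = O(u^2)$,
$$fl(a+b+c+d) \approx a+b+c+d + (a+b)\delta_1 + (a+b+c)\delta_2 + (a+b+c+d)\delta_3,$$
and collecting by variable,
$$= a+b+c+d + a(\delta_1+\delta_2+\delta_3) + b(\delta_1+\delta_2+\delta_3) + c(\delta_2+\delta_3) + d\,\delta_3.$$
Every coefficient is a sum of at most three $\delta_i$'s with $|\delta_i| \leqslant u$, so my guess is that the book absorbs them into a single $\delta$ with $|\delta| \leqslant u$ and uses the crude bound $3(a+b+c+d)\delta$. Is that the intended reasoning?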
If someone could help me understand the steps involved in this last equality, I would be grateful. Thanks to everyone who takes the time to help.