When adding approximate numbers, the result can be no more precise than the least precise number we were given. For instance, $101 + 1.001 + 1.0 \approx 103$, because the least precise number, $101$, is correct only to the nearest unit.
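A minimal sketch of this rule in Python (the variable names are mine, not from the text): the exact sum is $103.001$, but since $101$ is only good to the nearest unit, we report the sum rounded to a whole number.

```python
# Precision rule for addition: report the sum at the precision
# of the least precise addend. Here 101 is correct only to the
# nearest unit, so the sum is rounded to a whole number.
total = 101 + 1.001 + 1.0   # exact sum: 103.001
reported = round(total)      # round to the nearest unit
print(reported)              # 103
```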
But what happens when we evaluate errors? Suppose $2/3$ is approximated by $0.6667$. The absolute error is then $|2/3 - 0.6667| = 0.0000333\ldots$, which, by the rule above, we must round to $0.0000$ (as precise as $0.6667$).
This is certainly not right.
By the precision rule, $0.0000 \approx 0$ up to the 4th decimal place. However, $0.0000 \ne 0$, since we don't know (or can't measure) the digits that follow.
The best we can say is
$$ 0 < |2/3 - 0.6667| < 0.0001 $$
i.e., the error is somewhere on the order of $10^{-5}$.
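The point above can be checked with exact arithmetic. A small sketch (my own illustration, using Python's `fractions` and `decimal` modules): the true error of $0.6667$ is exactly $1/30000 \approx 3.33 \times 10^{-5}$, yet rounding it to four decimal places destroys it, so only the bound survives.

```python
from decimal import Decimal, ROUND_HALF_UP
from fractions import Fraction

# The exact error of approximating 2/3 by 0.6667.
true_error = abs(Fraction(2, 3) - Fraction("0.6667"))  # = 1/30000
print(float(true_error))  # ~3.333e-05

# Rounding that error to 4 decimal places (the precision of 0.6667)
# wipes it out entirely.
rounded = Decimal("0.0000333333").quantize(Decimal("0.0001"),
                                           rounding=ROUND_HALF_UP)
print(rounded)  # 0.0000

# All we can honestly report is the bound 0 < error < 0.0001.
assert 0 < true_error < Fraction("0.0001")
```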