Every book on numerical methods analyzes the precision of algorithms as if they were executed on a machine with infinite precision. Apparently the effects of using floating-point arithmetic (with its finite representation) are not very important, since the textbooks neglect them. However, I suspect these effects must have been studied somewhere. Could you please point me to a reference?
Updated. Changed the wording of the question as suggested by NoChance.
Numerical Recipes by Press, Teukolsky, Vetterling, and Flannery does carry out this analysis, including the effects of machine-epsilon rounding.
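To make the machine-epsilon effect concrete, here is a minimal Python sketch (my own illustration, not from the book): it shows the size of machine epsilon for IEEE 754 doubles, a classic representation error, and how rounding errors accumulate in naive summation compared with a compensated sum.

```python
import math

# Machine epsilon for IEEE 754 double precision: the gap between 1.0
# and the next representable float (2**-52).
eps = math.ulp(1.0)
print(eps)  # 2.220446049250313e-16

# A classic rounding effect: neither 0.1 nor 0.2 is exactly
# representable in binary, so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Rounding errors accumulate: naively summing 0.1 ten million times
# drifts measurably away from the exact value 1e6.
naive = 0.0
for _ in range(10_000_000):
    naive += 0.1
naive_error = abs(naive - 1e6)
print(naive_error)

# math.fsum tracks exact partial sums and returns the correctly
# rounded result, so its error is far smaller.
compensated = math.fsum([0.1] * 10_000_000)
compensated_error = abs(compensated - 1e6)
print(compensated_error)
```

This is exactly the kind of behavior (representation error plus error growth under repeated operations) that the rounding-error analyses in such references quantify.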