Say I tried to measure the length of an object with an instrument that has a resolution of 0.1 mm and got the following:
12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5
(all in mm). The standard error I would get is 0, which would imply that I have found the population mean, which is obviously not true. Even if I replaced one of the values with 12.6, my standard error would still be a lot smaller than the resolution. The question I'm alluding to is: how would I conduct my error analysis here? I understand how to do error analysis with standard errors, but I don't know how to do it with the instrument's resolution. Would I just say my standard error = 0.1 mm and do my usual error analysis?
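To make the numbers concrete, here's a quick sketch (in Python, using a hypothetical helper `standard_error`) of the two cases described above: ten identical readings give a standard error of exactly 0, and swapping in a single 12.6 still gives a standard error well below the 0.1 mm resolution.

```python
import math
import statistics

def standard_error(readings):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return statistics.stdev(readings) / math.sqrt(len(readings))

# Case 1: ten identical readings (all in mm)
identical = [12.5] * 10
print(standard_error(identical))        # 0.0

# Case 2: one reading replaced with 12.6
one_different = [12.5] * 9 + [12.6]
print(round(standard_error(one_different), 4))  # 0.01 -- far below 0.1 mm
```

So the sample-based standard error (0 or 0.01 mm) badly understates the uncertainty, since the scatter in the data can't reveal variation smaller than the 0.1 mm resolution.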