I understand that the formula for the margin of error is z times the standard deviation divided by the square root of n.
My question is: why is this the formula? I understand that the z-score is the number of standard deviations by which a raw score (i.e., an observed value or data point) lies above or below the mean of what is being observed or measured. But why is this multiplied by the standard error (the standard deviation divided by the square root of n)?
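For concreteness, here is a small worked example of the formula as I understand it (all numbers are made up: a standard deviation of 15, a sample size of n = 100, and z ≈ 1.96 for a 95% confidence level):

$$
\text{margin of error} = z \cdot \frac{\sigma}{\sqrt{n}} = 1.96 \times \frac{15}{\sqrt{100}} = 1.96 \times 1.5 = 2.94
$$

So mechanically I can compute it; what I don't understand is why multiplying the z value by the standard error gives the margin of error.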