So I have a list of data points, each with its own error (e.g., 3 +/- 0.2 units), and I want to determine which of the data points are within 3 sigma. To do this, do I just calculate the standard deviation of all my points and use that as sigma? If so, how does the error of the individual values come into play?
My goal is to use 3 sigma as a cutoff for whether or not a data point signifies a detection. If I understand this right, it just means my value should be less than 3 times the standard deviation. But this method does not rely at all on the individual error values, which - to my naive intuition - I feel like it should.
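To make my current approach concrete, here's a minimal Python sketch (the numbers are made up for illustration):

```python
import numpy as np

# Made-up example data: measured values and their individual errors
values = np.array([1.0, 0.9, 1.1, 5.0, 1.0])
errors = np.array([0.2, 0.3, 0.25, 0.4, 0.1])

# My current approach: take sigma to be the standard deviation
# of the whole list of values
sigma = np.std(values)

# Flag points exceeding the 3-sigma cutoff as "detections"
detections = values > 3 * sigma
print(detections)

# Note: the individual `errors` array is never used here,
# which is exactly what feels wrong to me.
```

Is this the right way to do it, or should the per-point errors enter the calculation somewhere?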
Sorry if this is overly simple, but I think the names of things are tripping me up. Thanks in advance for any help!!