[NOTE]: I am relatively new to statistics and attempting to gain a basic ground on the subject.
For my question, I was curious how one could calculate the range for an error bar when the data point is the average of three separate points, each of which has a 10% range of variation.
The basic premise behind this is that I have a Geiger counter with an accuracy range of 10% (the true dose could be 10% more or less than what it reads). However, after checking the dose of the same source on three separate occasions, the results have all been quite similar.
That will depend on your errors. If there is a correlation between the errors from measurement to measurement (e.g. as a result of poor calibration), then there may be no reduction at all in the error bar from taking multiple readings. This is actually quite common.
If your errors are completely uncorrelated, then you do gain information by repeating readings: the uncorrelated errors add in quadrature, so the standard error of the mean shrinks as $1/\sqrt{N}$. This scaling does not require a Normal distribution, only uncorrelated errors with finite variance.
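As a rough sketch of the two cases above (the readings below are made-up numbers, not from the question), this compares the error on the mean when a 10% per-reading error is fully correlated versus completely uncorrelated:

```python
import math

# Hypothetical example: three similar Geiger counter readings
# (say in uSv/h), each with a 10% instrument uncertainty.
readings = [1.02, 0.98, 1.01]
sigma_each = [0.10 * r for r in readings]  # 10% of each reading

n = len(readings)
mean = sum(readings) / n

# Fully correlated errors (e.g. a shared calibration offset):
# averaging does not help, the error on the mean stays ~10%.
sigma_correlated = sum(sigma_each) / n

# Completely uncorrelated errors: they add in quadrature,
# so the error on the mean shrinks by about 1/sqrt(N).
sigma_uncorrelated = math.sqrt(sum(s**2 for s in sigma_each)) / n

print(f"mean = {mean:.3f}")
print(f"error (correlated)   = {sigma_correlated:.3f}")
print(f"error (uncorrelated) = {sigma_uncorrelated:.3f}")
```

With three readings the uncorrelated error is smaller by a factor of roughly $\sqrt{3} \approx 1.7$, which is why the correlation structure matters so much for the width of the error bar.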