I have two instruments, call them A and B, each of which records the current time each time a signal sent by a transmitter, T, is received. The transmitter emits a signal roughly once per second (but not every second), over some time interval. Unfortunately, I have no record of how many signals were actually sent, nor of when each signal was sent; I have only the data from A and B.
We will assume that instrument A receives perfectly (i.e. the number of signals sent will be taken to be the number recorded by A, with no uncertainty). The goal is then to estimate the "efficiency" of instrument B, defined as the percentage of the pulses sent that B receives (so our estimate will be the number received by B divided by the number received by A).
I want to look at the "efficiency" of B as a function of time, so I bin the data into 10-second intervals and plot the efficiency for each interval on a graph. I now want to compute the uncertainty for each bin/point (i.e. add "error bars" to the graph). How can this be done?
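To make the setup concrete, here is a minimal sketch (Python/NumPy) of the binning and per-bin efficiency calculation I have in mind. The arrival-time arrays are placeholders, and the names `times_a`, `times_b`, and `bin_width` are mine, not from any real data file:

```python
import numpy as np

# Placeholder arrival times (seconds); in practice these are read from the two instruments.
times_a = np.sort(np.random.uniform(0.0, 300.0, 280))                 # times recorded by A
times_b = np.sort(np.random.choice(times_a, 250, replace=False))      # times recorded by B

bin_width = 10.0  # seconds
edges = np.arange(times_a.min(), times_a.max() + bin_width, bin_width)

# Count recorded signals per 10-second bin for each instrument.
counts_a, _ = np.histogram(times_a, bins=edges)
counts_b, _ = np.histogram(times_b, bins=edges)

# Efficiency per bin: signals B received divided by signals A received (A taken as "truth").
with np.errstate(divide="ignore", invalid="ignore"):
    efficiency = np.where(counts_a > 0, counts_b / counts_a, np.nan)

bin_centers = 0.5 * (edges[:-1] + edges[1:])
```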
I recall the formula being something like $\frac{1}{\sqrt{n}}$ (where $n$ is the number of points in the bin), or perhaps $\frac{\sqrt{N - D}}{N}$ (where $N$ is the numerator, i.e. the number of signals received by B, and $D$ is the denominator, i.e. the number of signals received by A), but I'm not sure.
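For what it's worth, if the first recalled formula turned out to be right, I would attach it to the bins from the sketch above like this (it is not obvious to me whether $n$ should be the count from A or from B; this uses `counts_a`, and the name `err_recalled` is mine):

```python
# Per-bin error bar from the recalled 1/sqrt(n) formula, using A's count as n.
err_recalled = np.where(counts_a > 0, 1.0 / np.sqrt(counts_a), np.nan)

# The error bars would then go on the plot roughly like this:
# import matplotlib.pyplot as plt
# plt.errorbar(bin_centers, efficiency, yerr=err_recalled, fmt="o")
# plt.xlabel("time (s)")
# plt.ylabel("efficiency of B")
# plt.show()
```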