I have simulated a set of probabilities $\{p_n\}$ by the hit-or-miss Monte Carlo method. Specifically, my program checks whether an $n\times n$ matrix has a certain feature; say that, out of $N$ trials, $k_n$ have this feature (a success). Then $p_n=k_n/N.$
Question: Now, I want to find the uncertainty (or standard error) $\sigma_{p_n}$ on $p_n$. I'd like to do this so that I can weight the probabilities when fitting them as a function of $n.$ My weights are defined as $w_n=\sigma_{p_n}^{-2}.$ This definition seems to be completely standard, but it doesn't sit well with my intuition, as explained below. I'd like to know if my intuition is simply wrong (and why), or if there are more suitable weights I could be using.
My problem: As far as I understand, $k_n$ is distributed binomially, so $$\sigma_{p_n}=N^{-1/2}\sqrt{p_n(1-p_n)}$$
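To make the quantities concrete, here is a minimal sketch of the computation (the counts `k_n` and `N` are made-up placeholders, not values from my actual simulation):

```python
import math

# Hypothetical counts from one batch of Monte Carlo trials
N = 10**6        # number of trials
k_n = 2500       # number of successes (matrices with the feature)

p_n = k_n / N                               # hit-or-miss estimate of the probability
sigma_p = math.sqrt(p_n * (1 - p_n) / N)    # binomial standard error on p_n
w_n = sigma_p ** -2                         # standard weight used in the fit
```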
This means that a probability $p_n$ smaller than some $p_m\leq1/2$ has a smaller uncertainty than $p_m$ does. This makes sense to me (a random walk will not go very far from its starting point if it has a very low probability of taking a step). However, it seems very counter-intuitive to weight a very low probability $p_n$ (where my program perhaps only had $10^2$ successes out of $10^9$ trials) more heavily than a high probability $p_m$ (which had success, say, half of the time). Using a "relative fluctuation" away from some $p_n$, defined as $$\sigma_{p_n}/p_n=N^{-1/2}\sqrt{\frac{1}{p_n}-1},$$ as my effective uncertainty seems closer to what my intuition tells me is important when choosing my weights.
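A quick numerical sketch of the two cases just mentioned ($10^2$ successes out of $10^9$ trials versus success half of the time) shows the contrast between the absolute and relative uncertainties:

```python
import math

N = 10**9
results = {}
for k in (10**2, N // 2):                 # 100 successes vs. success half the time
    p = k / N
    sigma = math.sqrt(p * (1 - p) / N)    # absolute (binomial) standard error
    results[p] = (sigma, sigma / p)       # (absolute, relative) uncertainty
    print(f"p = {p:.1e}: sigma = {sigma:.1e}, sigma/p = {sigma/p:.1e}")
```

The rare event has a far smaller absolute $\sigma$ (hence a far larger weight $w=\sigma^{-2}$), yet its relative fluctuation is orders of magnitude larger.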
I'm having a hard time articulating why, so here is an example: Flip a coin $100$ times: it comes up heads $54$ times. Flip another coin $100$ times: it comes up heads a single time. Now estimate the bias of each coin. Which estimate are you more confident in? I would say that I'm much more confident in my estimate for the first coin. Should I be? This is the essence of my question.
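Plugging these two coins into the binomial formula makes the tension concrete (a sketch of the arithmetic, nothing more):

```python
import math

N = 100
p_a, p_b = 54 / N, 1 / N                   # estimated biases of the two coins
sigma_a = math.sqrt(p_a * (1 - p_a) / N)   # coin 1: absolute error ~ 0.05
sigma_b = math.sqrt(p_b * (1 - p_b) / N)   # coin 2: absolute error ~ 0.01
w_a, w_b = sigma_a ** -2, sigma_b ** -2    # standard least-squares weights

print(sigma_a / p_a, sigma_b / p_b)        # relative errors: ~0.09 vs ~1.0
print(w_b / w_a)                           # coin 2 gets ~25x the weight of coin 1
```

So with $w=\sigma^{-2}$, the one-head coin, whose estimate has a relative error near $100\%$, would dominate the fit over the coin I feel far more confident about.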