I am attempting to find the uncertainty of the standard deviation. In the experiment I performed, I took 50 trials, in each of which I recorded the number of disintegrations over a fixed period of time as a count. Now, I want to calculate the standard deviation of the counts, along with its uncertainty. I have been told that for counting statistics, the uncertainty of any given count trial is $\sigma _{N}={\sqrt{N}}$, where $N$ is the number of counts. I denote the standard deviation by $s$ and uncertainties by $\sigma$. I have the formula
$s^{2}=\frac{1}{n}\sum (N_{i}-\bar{N})^2\Rightarrow s=\left [ \frac{1}{n}\sum N_{i}^2-\bar{N}^2 \right ]^{1/2}$
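As a quick numerical sanity check, the two forms of $s$ above (the definition and the expanded shortcut) can be compared on simulated data. This is only a sketch: the counts below are illustrative, approximately Poisson-distributed values generated with a normal approximation, not the actual experimental data, and `mu` is an assumed mean rate.

```python
import math
import random

# Illustrative data only: n trials of counts, approximately Poisson with
# mean mu (the normal approximation is good for large mu).
random.seed(42)
n, mu = 50, 3000
counts = [round(random.gauss(mu, math.sqrt(mu))) for _ in range(n)]

mean = sum(counts) / n

# Definition: s^2 = (1/n) * sum (N_i - mean)^2
s_def = math.sqrt(sum((N - mean) ** 2 for N in counts) / n)

# Shortcut:   s^2 = (1/n) * sum N_i^2 - mean^2
s_short = math.sqrt(sum(N ** 2 for N in counts) / n - mean ** 2)

print(s_def, s_short)  # the two forms agree to floating-point precision
```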
Next, I want to find the uncertainty. I come to the following steps:
$\sigma _{s}=\left [ \sum \left ( \sigma _{N_{i}}^2 \left ( \frac{\partial s}{\partial N_{i}} \right )^2 \right ) +\sigma _{\bar{N}}^2 \left ( \frac{\partial s}{\partial \bar{N}} \right )^2 \right ]^{1/2}$
$\quad=\left [ \sum \left ( \sigma _{N_{i}}^2 \frac{N_{i}^2}{n\left ( \sum N_{i}^2-n\bar{N}^2 \right )} \right ) +\sigma _{\bar{N}}^2 \left ( \frac{n\bar{N}^2}{ \sum N_{i}^2-n\bar{N}^2 } \right ) \right ]^{1/2}$
$\quad=\left [ \sum \left ( \frac{\sigma _{N_{i}}^2}{n^2} \frac{N_{i}^2}{\frac{1}{n} \sum N_{i}^2-\bar{N}^2 } \right ) +\sigma _{\bar{N}}^2 \left ( \frac{\bar{N}^2}{\frac{1}{n} \sum N_{i}^2-\bar{N}^2 } \right ) \right ]^{1/2}$
$\quad=\left [ \sum \left ( \frac{\sigma _{N_{i}}^2}{n^2s^2} N_{i}^2 \right ) +\frac{\sigma _{\bar{N}}^2}{s^2} \bar{N}^2 \right ]^{1/2}$
At this point, I note that $\sigma_{N_{i}}^2=N_{i}$, and $\bar{N}=\frac{1}{n} \sum N_{i} $, so
$\sigma_{\bar{N}}^2=\sum\left ( \sigma _{N_{i}}^2\left ( \frac{\partial \bar{N}}{\partial N_{i}} \right ) ^2\right )=\sum \frac{\sigma _{N_{i}}^2}{n^2}=\frac{\bar{N}}{n}$, so
$\sigma _{s}=\left [ \sum \left ( \frac{N_{i}^3}{n^2s^2} \right ) +\frac{\bar{N}^3}{ns^2} \right ]^{1/2}=\frac{1}{s}\left [ \frac{1}{n^2} \sum N_{i}^3+\frac{\bar{N}^3}{n} \right ]^{1/2}$
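Evaluating this expression on simulated Poisson-like counts reproduces the problem: the resulting $\sigma_s$ comes out far larger than $s$ itself. Again, this is only a sketch on illustrative data (assumed `n` and `mu`, normal approximation to Poisson), not the actual measurements.

```python
import math
import random

# Illustrative counts only (approximately Poisson via the normal approximation).
random.seed(42)
n, mu = 50, 3000
counts = [round(random.gauss(mu, math.sqrt(mu))) for _ in range(n)]

mean = sum(counts) / n
s = math.sqrt(sum((N - mean) ** 2 for N in counts) / n)

# The (flawed) expression derived above:
# sigma_s = (1/s) * [ (1/n^2) * sum N_i^3 + mean^3 / n ]^(1/2)
sigma_s = math.sqrt(sum(N ** 3 for N in counts) / n ** 2 + mean ** 3 / n) / s

print(s, sigma_s)  # sigma_s is far larger than s, as in the question
```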
The problem with this result is the size of the uncertainty. For my data, I got $s=57.028\pm 435.519$. Clearly, this cannot be correct (my data are not that spread out). If you need my specific data, I can post it. Anyway, I am curious where I went wrong in my derivation. Any help would be greatly appreciated!
EDIT: To clarify the purpose of this: I know that for an infinite number of data points, the standard deviation should converge to $\sqrt{\bar{N}}$. I want to see whether this theoretical standard deviation lies within the interval $s\pm\sigma_{s}$ around my calculated value.
I have figured out where I went wrong. I treated $\bar{N}$ as independent of the $N_{i}$, when in fact it is computed from them. I also used the population formula rather than the sample formula (dividing by $n$ instead of $n-1$). After correcting for both, I got a reasonable answer. I will not derive it here, but I will post the result in case it benefits anybody in the future.
In general: $\sigma_{s}=\frac{1}{\left ( n-1 \right )s}\left [ \sum \sigma_{N_{i}}^2\left ( N_{i}-\bar{N} \right )^2 \right ]^{1/2}$
In the case of a counting experiment: $\sigma_{s}=\frac{1}{\left ( n-1 \right )s}\left [ \sum N_{i}\left ( N_{i}-\bar{N} \right )^2 \right ]^{1/2}$
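The corrected formula can be sketched numerically and compared against the theoretical $\sqrt{\bar{N}}$. As before, the counts are illustrative (assumed `n` and `mu`, normal approximation to Poisson), not the actual data; with the corrected formula, $\sigma_s$ comes out small compared to $s$, as expected.

```python
import math
import random

# Illustrative counts only (approximately Poisson via the normal approximation);
# n and mu are placeholders, not the experiment's actual values.
random.seed(42)
n, mu = 50, 3000
counts = [round(random.gauss(mu, math.sqrt(mu))) for _ in range(n)]

mean = sum(counts) / n
# Sample standard deviation (n - 1 in the denominator).
s = math.sqrt(sum((N - mean) ** 2 for N in counts) / (n - 1))

# Corrected uncertainty for a counting experiment (sigma_{N_i}^2 = N_i):
# sigma_s = [ sum N_i (N_i - mean)^2 ]^(1/2) / ((n - 1) s)
sigma_s = math.sqrt(sum(N * (N - mean) ** 2 for N in counts)) / ((n - 1) * s)

# Compare with the theoretical value sqrt(mean) for Poisson counts.
print(f"s = {s:.2f} +/- {sigma_s:.2f}, sqrt(mean) = {math.sqrt(mean):.2f}")
```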