What would it be called if I took the standard deviation of my measurements and divided it by the mean of their individual errors?
Let's say I take several measurements of objects A, B, and C. I take the standard deviation of the measurements for each of A, B, and C, and I get $\sigma_{A}$, $\sigma_{B}$, and $\sigma_{C}$. I also have errors for the individual measurements (in my case based on signal-to-noise arguments). Let's call the average of these errors for a given object $\sigma_{SNR}$.
What would be the statistical significance of $\frac{\sigma_{A}}{\sigma_{SNR,A}}$?
It seems similar to the p-value of a $\chi^2$ test, in that it would indicate how significant the observed scatter is relative to the measurement errors. Is it some kind of variability test? Is this even a valid thing to do?
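For concreteness, here is a minimal sketch (Python, with made-up numbers for a single object A) of the quantity I mean:

```python
import numpy as np

# Hypothetical repeated measurements of object A and the per-measurement
# errors estimated from signal-to-noise arguments (numbers are invented).
measurements_A = np.array([10.2, 9.8, 10.5, 9.9, 10.4])
errors_A       = np.array([0.30, 0.25, 0.35, 0.30, 0.28])

sigma_A     = np.std(measurements_A, ddof=1)  # scatter of the measurements
sigma_SNR_A = np.mean(errors_A)               # average per-measurement error

ratio_A = sigma_A / sigma_SNR_A
print(ratio_A)  # values well above 1 suggest scatter beyond the measurement errors
```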
So here is the example I have in mind. Suppose I'm taking radial velocity measurements (the velocity at which a star is moving towards or away from us) of a star over some period of time. Even if the star's true velocity is constant, variations in the photosphere will produce variations in the measured radial velocity over time; they are generally small, but can be large. The standard deviation of these measurements gives an idea of how variable the radial velocity is over time. For this I use $\sigma$.
Each measurement relies on taking a spectrum, which has a signal-to-noise ratio, and photon statistics give an estimate of the error on that measurement. For this I use $\sigma_{SNR}$.
I'm wondering whether, by taking the ratio of $\sigma$ to $\sigma_{SNR}$, I can determine if the dispersion is significant, and whether this is a statistical test that already has a name.
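To show the kind of check I have in mind, here is a rough sketch with simulated data, assuming Gaussian errors and a single common $\sigma_{SNR}$ (both simplifications). It computes the ratio and, since I mentioned $\chi^2$ above, also the p-value one would get from treating the scaled variance as a $\chi^2$ statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated radial-velocity time series: a constant true velocity plus
# Gaussian noise at the level implied by the signal-to-noise ratio.
n         = 30
sigma_snr = 0.05                      # km/s, per-measurement error from SNR
rv        = 12.0 + rng.normal(0.0, sigma_snr, n)

sigma = np.std(rv, ddof=1)            # observed RV dispersion
ratio = sigma / sigma_snr             # ~1 if only measurement noise is present

# Under the null hypothesis (constant RV, errors correct and Gaussian),
# (n - 1) * sigma^2 / sigma_snr^2 follows a chi-squared distribution with
# n - 1 degrees of freedom, giving a p-value for "excess" dispersion.
chi2_stat = (n - 1) * sigma**2 / sigma_snr**2
p_value   = stats.chi2.sf(chi2_stat, df=n - 1)

print(f"sigma/sigma_SNR = {ratio:.2f}, p-value = {p_value:.3f}")
```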