I have a question regarding standard deviation. Let me start with an example: I have response times (RT) from users, let's say
RT1 = 3s
RT2 = 5s
RT3 = 8s
I have a normalizing constant for the response time (what the response time usually should be). Let's say this constant is 4s. So we have the normalized response times:
RT1 = 3s / 4
RT2 = 5s / 4
RT3 = 8s / 4
Now I would like to calculate the standard deviation. For the standard deviation I also have a constant indicating what the standard deviation typically is.
Should I now use the normalized response times or the original response times to calculate the standard deviation, and then divide the result by the normalization constant for the standard deviation?
Standard deviation scales under multiplication by a constant: if $X$ has standard deviation $\sigma$, then $cX$ has standard deviation $|c| \sigma$. Dividing by a constant $c$ is just multiplying by $1/c$, so $X/c$ has standard deviation $\sigma / c$. It therefore doesn't matter whether you first calculate the standard deviation and then normalize, or first normalize and then calculate the standard deviation — both give the same result.
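You can check this numerically with your own numbers. Here's a minimal sketch using NumPy and the values from the question (RTs of 3, 5, 8 seconds, normalization constant 4):

```python
import numpy as np

rts = np.array([3.0, 5.0, 8.0])  # raw response times in seconds
c = 4.0                          # normalization constant

# Order 1: compute the SD of the raw data, then divide by the constant
sd_then_normalize = np.std(rts, ddof=1) / c

# Order 2: normalize the data first, then compute the SD
normalize_then_sd = np.std(rts / c, ddof=1)

print(sd_then_normalize, normalize_then_sd)
print(np.isclose(sd_then_normalize, normalize_then_sd))  # True
```

(`ddof=1` gives the sample standard deviation; the equivalence holds either way, since both versions of the formula scale the same under division by a constant.)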