A police officer uses a radar gun to determine the speed of five vehicles traveling on the highway. The speeds of those vehicles are as follows.
65, 55, 60, 66, 69
Find the standard deviation, rounded to three decimal places. I got the answer 4.940.
I'm not sure where the mistake is.
Thank you

Comments on the denominator of the sample variance.
The usual (but not only) definition of the sample variance is $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2.$ The popularity of this definition probably arises from several theoretical considerations:
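As a quick check against the question's data, here is a short Python sketch computing both the divisor-$(n-1)$ definition above and the divisor-$n$ alternative for the five speeds. The asker's 4.940 is the divisor-$n$ value; the definition above gives 5.523.

```python
from math import sqrt

speeds = [65, 55, 60, 66, 69]
n = len(speeds)
xbar = sum(speeds) / n                      # sample mean: 63.0
ss = sum((x - xbar) ** 2 for x in speeds)   # sum of squared deviations: 122.0

s = sqrt(ss / (n - 1))   # divisor n-1: rounds to 5.523
v = sqrt(ss / n)         # divisor n:   rounds to 4.940 (the asker's value)

print(round(s, 3), round(v, 3))
```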
(1) $S^2$ is an unbiased estimator of the population variance $\sigma^2$ for most of the distributions used as population models, including the normal distribution. That is $E(S^2) = \sigma^2.$
(2) In the common case where the population is normally distributed, the distribution of $S^2$ is closely related to the distribution $\mathsf{Chisq}(n-1)$ [the chi-squared distribution with $n-1$ degrees of freedom]. In particular, $$\frac{(n-1)S^2}{\sigma^2} \sim \mathsf{Chisq}(n-1).$$ This makes it easy to make a 95% confidence interval for $\sigma^2$ of the form $\left(\frac{(n-1)S^2}{U},\,\frac{(n-1)S^2}{L}\right),$ where $L$ and $U$ cut 2.5% of the probability from the lower and upper tails of the distribution $\mathsf{Chisq}(n-1),$ respectively. The same chi-squared distribution is used in conjunction with $S^2$ to test hypotheses about the population variance $\sigma^2.$
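The interval construction just described can be sketched in Python using SciPy's chi-squared quantile function. (The value of $S^2$ here is illustrative; $n=5$ as in the question.)

```python
from scipy.stats import chi2

n = 5
s2 = 30.5                      # illustrative sample variance (divisor n-1)
L = chi2.ppf(0.025, n - 1)     # cuts 2.5% from the lower tail of Chisq(n-1)
U = chi2.ppf(0.975, n - 1)     # cuts 2.5% from the upper tail

# 95% CI for the population variance sigma^2
ci = ((n - 1) * s2 / U, (n - 1) * s2 / L)
print(ci)
```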
(3) In some applications it is convenient to use the sample standard deviation $S$ (perhaps mainly because the units of $S$ are the same as the units of the data). While $S$ is not an unbiased estimator of the population standard deviation $\sigma$ (that is, $E(S) \ne \sigma$), the difference between $E(S)$ and $\sigma$ is not large for normal data and gets smaller as the sample size $n$ increases. In many applications the difference between $E(S)$ and $\sigma$ is simply ignored.
However, a few responsible authors suggest using the estimator $V = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2.$ One argument for this is that students don't need to fuss over the difference in definitions between the sample and population variances. Another is that, even though $V$ is slightly biased, it makes up for that by having a slightly smaller variance.
More technically, the 'mean squared error' of an estimator $T$ of a parameter $\tau$ is defined as
$$MSE(T) = E[(T - \tau)^2] = Var(T) + B^2(T),$$ where $B^2(T) = [E(T) - \tau]^2$ is the square of the 'bias' $B(T) = E(T) - \tau.$
For normal data $V$ has smaller MSE than does $S^2.$ Furthermore, the estimator $V^\prime = \frac{1}{n+1}\sum_{i=1}^n (X_i - \bar X)^2$ has a slightly smaller MSE than either $V$ or $S^2.$ While I know of nobody who proposes using $V^\prime$ as an estimator of $\sigma^2,$ I suppose a case could be made.
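Under the normal model these MSEs have a closed form: the estimator $SS/c,$ where $SS = \sum_i (X_i - \bar X)^2$ and $SS/\sigma^2 \sim \mathsf{Chisq}(n-1),$ has mean $\sigma^2(n-1)/c$ and variance $2\sigma^4(n-1)/c^2.$ A short sketch comparing the three divisors:

```python
def mse_divisor(c, n=5, sigma2=225.0):
    """MSE of SS/c as an estimator of sigma^2, where SS/sigma^2 ~ Chisq(n-1)."""
    mean = sigma2 * (n - 1) / c             # E(SS/c)
    var = 2 * sigma2**2 * (n - 1) / c**2    # Var(SS/c)
    bias = mean - sigma2
    return var + bias**2

mse_s2 = mse_divisor(4)   # S^2: divisor n-1
mse_v  = mse_divisor(5)   # V:   divisor n
mse_vp = mse_divisor(6)   # V':  divisor n+1

print(mse_s2, mse_v, mse_vp)   # decreasing: n+1 beats n beats n-1
```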
Even though it would not be right to insist that $S^2$ is the only correct estimator of $\sigma^2,$ using $S^2$ seems to be established practice. All estimators have advantages and disadvantages, and established habits die hard, so I don't suppose we will see a massive campaign for another estimator anytime soon.
Suppose the experiment is to sample $n=5$ values from $\mathsf{Norm}(\mu=100,\,\sigma=15).$ Then $\frac{4S^2}{225} \sim \mathsf{Chisq}(4),$ so one can show that $E(S^2)=225$ and $Var(S^2)=MSE(S^2)=25,312.5.$ Similarly, $\frac{5V}{225} \sim \mathsf{Chisq}(4),$ so that $E(V) = 180,\,B(V) = -45,$ $Var(V)=16,200,$ $B(V)^2 = 2025,$ and $MSE(V) = 18,225.$
Notice that $S^2$ is unbiased, but has $MSE(S^2)=25,312.5 > MSE(V) = 18,225.$
The histograms below are based on a million realizations of $S^2$ (top) and $V.$ Means are indicated by vertical red lines. Superimposed curves are appropriately scaled densities of $\mathsf{Chisq}(4).$
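A sketch of that simulation in Python (fewer replications here for speed, and with plotting omitted; the sample means should land near the theoretical values $E(S^2)=225$ and $E(V)=180$):

```python
import numpy as np

rng = np.random.default_rng(2024)
n, mu, sigma = 5, 100, 15
reps = 100_000

# Each row is one sample of n=5 normal observations
x = rng.normal(mu, sigma, size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

s2 = ss / (n - 1)   # realizations of S^2 (divisor n-1)
v  = ss / n         # realizations of V   (divisor n)

print(s2.mean(), v.mean())   # near 225 and 180, respectively
```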