I understand that the Cramér–Rao Lower Bound (CRLB) for a parameter is a lower bound on the variance of any estimator of that parameter. The CRLB can be computed directly if the probability distribution and the model are known. However, if the probability distribution and the model are known, can we not simply calculate the variance itself from the usual formula $V(x) = \sum_{i}(x_i-\hat{x})^2p(x_i)$ ?
In short, what is the purpose of the CRLB when it seems like one can calculate the standard deviation directly whenever one can calculate the CRLB? What am I missing?
It might be useful to consider the following:
In a more typical example, you might assume the form of the probability density function, but not know the values of the parameters which define it. For example, a univariate Gaussian distribution has two parameters, the mean $\mu$ and the variance $\sigma^{2}$. These are fixed but unknown. You would then use "estimators" to compute them from the data.
For example, an estimator for the variance might be $$ S^{2}=\frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\hat{\mu}\right)^{2} $$
where $x_{i}$ is a data point, $\hat{\mu}$ is a (separate) estimate of the mean of the distribution, and $N$ is the number of data points. (Please note that the probability density function itself, $p\left(x\right)$, is not explicitly part of the estimator. If you already knew the exact values of the parameters that define $p\left(x\right)$, you would not need to estimate them from the data.)
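To make the point concrete, here is a small sketch (in Python with NumPy; the sample data and parameter values are just illustrative) showing that $S^{2}$ is computed purely from the data, with no reference to $p(x)$:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(5.0, 2.0, size=50)  # simulated data; the "true" parameters are normally unknown

# Estimate the mean first, then plug it into the variance estimator S^2.
mu_hat = x.mean()
s2 = np.sum((x - mu_hat) ** 2) / (len(x) - 1)

# Only the data points x_i enter the formula -- never p(x) itself.
print(f"mu_hat = {mu_hat:.3f}, S^2 = {s2:.3f}")
```

This is the same quantity `np.var(x, ddof=1)` would return; the explicit sum is written out only to mirror the formula above.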
You would then be interested in questions such as the performance of such an estimator, using theoretical constructs such as the CRLB as a benchmark. There are some additional worked examples on Wikipedia that may help illustrate this concretely: https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao_bound#Normal_variance_with_known_mean
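The Wikipedia example linked above (Gaussian variance with known mean) can be checked numerically. A Monte Carlo sketch, with parameter values chosen purely for illustration: the estimator $T=\frac{1}{N}\sum_i(x_i-\mu)^2$ is unbiased for $\sigma^2$ when $\mu$ is known, and its variance should match the CRLB of $2\sigma^4/N$ in that case.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma2 = 0.0, 4.0    # true parameters (illustrative values)
N, trials = 10, 20000    # sample size, number of Monte Carlo repetitions

# Draw many independent samples of size N and apply the known-mean
# variance estimator T = (1/N) * sum (x_i - mu)^2 to each.
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))
T = np.mean((samples - mu) ** 2, axis=1)

# CRLB on the variance of an unbiased estimator of sigma^2
# for a Gaussian with known mean: 2 * sigma^4 / N.
crlb = 2 * sigma2**2 / N
empirical_var = T.var(ddof=1)

print(f"CRLB             : {crlb:.3f}")
print(f"Empirical Var(T) : {empirical_var:.3f}")
```

Here the estimator actually attains the bound, so the empirical variance of `T` across trials comes out close to `crlb`; for most estimators the CRLB is only a floor, not an equality.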
I hope this helps.