I have seen in textbooks that the variance of a set of numbers is sometimes calculated as
$Var(X) = \sum_{i=1}^{n} \frac{(x_{i}-\mu)^2}{n}$
and sometimes as
$Var(X) = \sum_{i=1}^{n} \frac{(x_{i}-\bar{x})^2}{n-1}$
where $\mu$ is the distribution mean and $\bar{x}$ is the sample mean.
In which cases should each of these formulas be used, and why? Also, since the standard deviation is $\sigma = \sqrt{Var(X)}$, which definition of the variance should we use when computing it, and why?
If you are given a sample of size $n$ and you do not know the mean of the underlying distribution, then you should use the version of the estimated variance with $n-1$, computed around the sample mean $\bar{x}$. This slick trick (dividing by $n-1$ instead of $n$) makes the empirical estimator of the variance "unbiased", meaning that the expected value of the estimator equals the true variance. If you divided by $n$ while also using the estimated mean, you would tend to under-estimate the variance, i.e. the estimator would have negative bias.

If you DO know the theoretical mean $\mu$ of your distribution, then you should use it in place of the empirical sample mean, and divide by $n$ instead of $n-1$.

It all comes down to whether you know the mean of your distribution, or have to estimate it from the sample.
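You can check this bias claim numerically. Here is a minimal simulation sketch (assuming NumPy is available): it draws many small samples from a normal distribution with known variance, then averages the $n$-divisor and $(n-1)$-divisor estimates. NumPy's `var` exposes the divisor through its `ddof` parameter (`ddof=0` divides by $n$, `ddof=1` by $n-1$).

```python
import numpy as np

rng = np.random.default_rng(0)

true_var = 4.0        # variance of N(0, 2^2)
n = 5                 # small sample size, so the bias is visible
trials = 200_000      # many trials to approximate the expected value

# Each row is one sample of size n
samples = rng.normal(0.0, 2.0, size=(trials, n))

# Divide by n, using the sample mean of each row (biased)
biased = samples.var(axis=1, ddof=0).mean()
# Divide by n-1, using the sample mean of each row (unbiased)
unbiased = samples.var(axis=1, ddof=1).mean()

print(f"divide by n:   {biased:.3f}")    # close to true_var * (n-1)/n = 3.2
print(f"divide by n-1: {unbiased:.3f}")  # close to true_var = 4.0
```

The $n$-divisor average comes out near $\sigma^2 (n-1)/n$, which is exactly the negative bias described above, while the $(n-1)$-divisor average matches the true variance.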