I am trying to understand how the variance of the OLS estimator is calculated.
Here is what I have: $E[\hat{\beta} \mid X] = \beta$ and $V(\hat{\beta} \mid X) = \sigma^2(X^TX)^{-1}$, where $\hat{\beta}$ is the OLS estimator of $\beta$ (these were derived in a previous calculation in my book).
Here is what I don't get: "using the decomposition of variance, see Assumption 2.4"
$V\big(\hat{\beta} \big) = E\big[ V\big(\hat{\beta} \mid X \big) \big] + V\big[ E\big(\hat{\beta} \mid X \big) \big] = V\big(\hat{\beta} \mid X \big) + V(\beta)$
I don't get how $E\big[ V\big(\hat{\beta} \mid X \big) \big] = V\big(\hat{\beta} \mid X \big)$ follows from Assumption 2.4. I can see that it should be:
$E \big[V(\hat{\beta} \mid X)\big ] = E[\sigma^2(X^TX)^{-1}]$
Assumption 2.4 (Homoscedasticity): the variance of the error term is the same regardless of the explanatory variables: $V(\epsilon_j \mid \mathbf{x}_j) = \sigma^2$.
The result that
$$V\big(\hat{\beta} \big) = V\big(\hat{\beta} \mid X \big)$$
(because the variance of $\beta$ is zero, $\beta$ being a vector of constants) would hold only if the regressor matrix were considered deterministic. But in that case, conditioning on a deterministic matrix is essentially meaningless, or at least useless. The correct general expression, as you write, is (using the decomposition-of-variance formula and the conditional homoskedasticity assumption)
$$V\big(\hat{\beta} \big) = E \big[V(\hat{\beta} \mid X)\big ] = \sigma^2E[(X^TX)^{-1}]$$
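This is easy to check numerically. Below is a quick Monte Carlo sketch (NumPy, with an arbitrary choice of $n$, $\beta$, and $\sigma$): draw a fresh random $X$ and homoskedastic errors in each replication, and compare the empirical variance of $\hat{\beta}$ across replications with $\sigma^2 E[(X^TX)^{-1}]$, the latter estimated by averaging $(X^TX)^{-1}$ over the same draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, reps = 50, 2, 1.5, 20000
beta = np.array([1.0, -2.0])

beta_hats = np.empty((reps, p))
avg_inv = np.zeros((p, p))
for r in range(reps):
    X = rng.normal(size=(n, p))            # random regressors, redrawn each replication
    eps = rng.normal(scale=sigma, size=n)  # homoskedastic errors, variance sigma^2
    y = X @ beta + eps
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hats[r] = XtX_inv @ X.T @ y       # OLS estimate for this draw
    avg_inv += XtX_inv
avg_inv /= reps                            # Monte Carlo estimate of E[(X'X)^{-1}]

emp_var = np.cov(beta_hats, rowvar=False)  # empirical unconditional V(beta_hat)
theory = sigma**2 * avg_inv                # sigma^2 E[(X'X)^{-1}]
print(np.round(emp_var, 4))
print(np.round(theory, 4))
```

The two matrices agree up to simulation noise, while $\sigma^2 (X^TX)^{-1}$ for any single draw of $X$ does not.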
If the regressor matrix is considered deterministic, its expected value equals itself, and then you get the result that troubled you. But note that a deterministic regressor matrix is not consistent with the assumption of an identically distributed sample, because here one has the unconditional expected value
$$E(y_i) = E(\mathbf x'_i\beta) = \mathbf x'_i\beta$$ and so the dependent variable has a different unconditional expected value in each observation.
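To see this concretely, here is a minimal sketch (with a made-up fixed $X$ and $\beta$): holding the regressor matrix fixed and simulating $y_i = \mathbf x'_i\beta + \epsilon_i$ many times, the sample mean of each $y_i$ converges to a different value $\mathbf x'_i\beta$, so the observations cannot be identically distributed.

```python
import numpy as np

rng = np.random.default_rng(1)
# deterministic regressor matrix: each row x_i' is a fixed constant
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
beta = np.array([0.5, 2.0])
reps = 100000

# simulate y = X beta + eps repeatedly and average over replications
Y = X @ beta + rng.normal(size=(reps, X.shape[0]))
means = Y.mean(axis=0)   # approximately X beta: one distinct mean per observation
print(means)
```

Each entry of `means` approximates a different $\mathbf x'_i\beta$, i.e. $E(y_i)$ varies with $i$.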