Suppose $Y = X\beta + \epsilon$, where $\epsilon \sim N_{n}(0, \sigma^2 I)$, is a linear model. I am assuming that in each data point $(x_{i}, Y_{i})$, $x_{i}$ is a constant and not a random variable, while $Y_{i}$, the $i$-th element of the vector $Y$, is a random variable. It is easy to find the distributions of the vector $Y$ and of $\hat{\beta}$, the estimator of $\beta$; $\beta$ itself is not random, but $\hat{\beta}$ is, since it is an estimator. I also let $\hat{Y} = X\hat{\beta}$, i.e. $\hat{Y}$ is the predicted value of the vector $Y$. I want to know the answers to the following questions:
i) Are $Y_{i}$ and $\hat{Y_{i}}$ independent random variables?
ii) Are $Y_{i}$ and $e_{i}$ independent random variables? Here $e_{i}=Y_{i}-\hat{Y_{i}}$.
iii) Are $\hat{Y_{i}}$ and $e_{i}$ independent random variables?
Hint for (i): If $X=I$, then $\hat{Y} = \hat{\beta} = Y$.
Hint for (ii): Let $n=2$ and $X=\begin{bmatrix} 1 \\ 0\end{bmatrix}$. Then $\hat{Y} = \begin{bmatrix}Y_1 \\ 0 \end{bmatrix}$ so $e = \begin{bmatrix} 0 \\ Y_2 \end{bmatrix}$.
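The algebra in this hint is quick to check numerically. Here is a short NumPy sketch (the particular values chosen for $Y_1, Y_2$ are arbitrary and just for illustration):

```python
import numpy as np

# Hint (ii): X is the 2x1 matrix [1, 0]^T.
X = np.array([[1.0], [0.0]])

# Hat matrix H = X (X^T X)^{-1} X^T.
H = X @ np.linalg.inv(X.T @ X) @ X.T

Y = np.array([3.0, 5.0])  # an arbitrary realization of (Y_1, Y_2)
Y_hat = H @ Y             # fitted values
e = Y - Y_hat             # residuals

assert np.allclose(H, [[1.0, 0.0], [0.0, 0.0]])
assert np.allclose(Y_hat, [3.0, 0.0])  # (Y_1, 0)
assert np.allclose(e, [0.0, 5.0])      # (0, Y_2)
```

So $e_2 = Y_2$ exactly, which rules out independence of $Y_i$ and $e_i$ in general.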
For (iii):
I will assume $X$ has linearly independent columns so that $X^\top X$ is invertible. Then $\hat{\beta}$ has the explicit formula $\hat{\beta} = (X^\top X)^{-1} X^\top Y = \beta + (X^\top X)^{-1} X^\top \epsilon$, and thus $$\hat{Y} = X\beta + X(X^\top X)^{-1} X^\top \epsilon$$ and $$e = (I-X(X^\top X)^{-1} X^\top) \epsilon.$$
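These explicit formulas can be sanity-checked numerically on a simulated example; here is a sketch in NumPy (the design matrix, $\beta$, and $\epsilon$ below are arbitrary choices, not anything from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small design with linearly independent columns (values are arbitrary).
n, p = 6, 2
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0])
eps = rng.normal(size=n)   # one realization of epsilon
Y = X @ beta + eps

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T      # hat matrix

# OLS estimator computed directly from the data.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Check each formula from the text.
assert np.allclose(beta_hat, beta + XtX_inv @ X.T @ eps)
assert np.allclose(X @ beta_hat, X @ beta + H @ eps)          # Y_hat
assert np.allclose(Y - X @ beta_hat, (np.eye(n) - H) @ eps)   # e
```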
$\hat{Y}$ and $e$ are jointly Gaussian (being linear transformations of the same Gaussian vector $\epsilon$), so "$\hat{Y}$ and $e$ are independent" is equivalent to "$\text{Cov}(\hat{Y}, e) = 0$." (For jointly Gaussian vectors, zero covariance implies independence; this is not true for general random vectors.)
For shorthand let $H = X(X^\top X)^{-1} X^\top$ (the "hat matrix"), so that $\hat{Y} = X\beta + H\epsilon$ and $e = (I-H)\epsilon$. We have \begin{align} \text{Cov}(\hat{Y}, e) &= \text{Cov}(X\beta + H\epsilon, (I-H)\epsilon) \\ &= \text{Cov}(H\epsilon, (I-H)\epsilon) & \text{constant shifts do not affect covariance} \\ &= E[(H\epsilon - E[H\epsilon])((I-H)\epsilon - E[(I-H)\epsilon])^\top] & \text{defn. of covariance} \\ &= H\, E[(\epsilon - E[\epsilon])(\epsilon - E[\epsilon])^\top]\, (I-H)^\top \\ &= H(\sigma^2 I)(I-H) & \text{since } I-H \text{ is symmetric.} \end{align} Can you finish from here?
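(Spoiler for the last step: $H$ is idempotent, $H^2 = H$, so $\sigma^2 H(I-H) = \sigma^2(H - H^2) = 0$.) If you want to convince yourself numerically, here is a quick NumPy check on an arbitrary design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary design with linearly independent columns.
n, p = 5, 2
X = rng.normal(size=(n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T

# H is symmetric and idempotent, so H (I - H) = H - H^2 = 0,
# and hence Cov(Y_hat, e) = sigma^2 H (I - H) = 0.
assert np.allclose(H, H.T)
assert np.allclose(H @ H, H)
assert np.allclose(H @ (np.eye(n) - H), np.zeros((n, n)))
```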