Let $H$ denote the hat matrix and $e$ the vector of residuals.
In the book Applied Regression Analysis by Draper and Smith, it is written that:
$\operatorname{Var}(e_i)$ is given by the $i$th diagonal element $(1-h_{ii})$, and $\operatorname{Cov}(e_i,e_j)$ is given by the $(i,j)$th element $(-h_{ij})$, of the matrix $(I-H)\sigma^2$.
But this seems to contradict what I have derived:
$\operatorname{Var}(e_i)$ is given by the $i$th diagonal element $(1-h_{ii})\sigma^2$ of the matrix $(I-H)\sigma^2$,
and
$\operatorname{Cov}(e_i,e_j)$ is given by the $(i,j)$th element $(-h_{ij})\sigma^2$ of the matrix $(I-H)\sigma^2$.
Can you please explain why this apparent contradiction arises?
The presence of $\sigma^2$ in your formulae is correct. I suspect the reference uses the normalization $\sigma = 1$; this is a point to check in the book (unfortunately I have no access to it). In any case, let me set up some context. Let $Y=X\theta+\epsilon$ be a linear regression model with
$$E[\epsilon]=0,~~\operatorname{Cov}(\epsilon)=E[\epsilon\epsilon^T]:=\sigma^2 I_n,$$
where $I_n$ denotes the $n\times n$ identity matrix and $\operatorname{Cov}$ the covariance matrix. Using least-squares (LS) estimation, one arrives at the estimator $\hat \theta$ of the regression coefficients $\theta$, with
$$\hat \theta = (X^T X)^{-1}X^T Y, $$ and $$\hat Y = X\hat \theta = X(X^T X)^{-1}X^T Y := PY.$$ The residuals $R$ are then
$$R:= Y-\hat Y =(I_n-P)Y,$$
with $E[R]=E[Y]-E[\hat Y]= 0$ and
$$\operatorname{Cov}(R)=(I_n-P)\operatorname{Cov}(Y)(I_n-P)^T = (I_n-P)\,\sigma^2 I_n\,(I_n-P)^T=\sigma^2(I_n-P), $$
as $(I_n-P)$ is idempotent due to the idempotency of $P$, and $(I_n-P)^T=(I_n-P)$ by symmetry of $P$. For additional details I refer to this thread and this wiki.
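The derivation above can be checked numerically. Here is a minimal NumPy sketch with a synthetic design matrix (all variable names are my own, not from the book): it confirms that $P$ and $I_n-P$ are idempotent and symmetric, and that $\operatorname{Cov}(R)=\sigma^2(I_n-P)$, so the diagonal entries are indeed $(1-h_{ii})\sigma^2$ as in your formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix with intercept
sigma2 = 2.5                                                    # error variance sigma^2

P = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix P = X (X^T X)^{-1} X^T
M = np.eye(n) - P                      # residual maker I_n - P

# P and I_n - P are idempotent (not nilpotent) and symmetric:
assert np.allclose(P @ P, P)
assert np.allclose(M @ M, M) and np.allclose(M.T, M)

# Hence Cov(R) = (I - P) (sigma^2 I) (I - P)^T = sigma^2 (I - P):
cov_R = M @ (sigma2 * np.eye(n)) @ M.T
assert np.allclose(cov_R, sigma2 * M)

# Diagonal entries match (1 - h_ii) * sigma^2, with the sigma^2 included:
assert np.allclose(np.diag(cov_R), (1 - np.diag(P)) * sigma2)
```

The assertions all pass, which is exactly the point: the book's $(1-h_{ii})$ and $(-h_{ij})$ are entries of $I_n-P$, and the factor $\sigma^2$ must multiply them to give the actual variances and covariances.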