This is a related but separate question to: another question.
I'm trying to prove that the derived matrix $\Sigma$ is non-negative definite, and I think knowing the answer to the question in the title will help.
Background: some Lebesgue integration, but I'm not an expert at it.
We could simply observe that $\operatorname{Var}(X) = E(X^2) - E(X)^2$ and that the variance is always nonnegative. However, since it isn't hard to show, it is worth seeing why that is so.
Recall that $$ \operatorname{Var}(X) := E([X-E(X)]^2) = \int_{\Omega} (X(\omega)-E(X))^2 \,\mathrm{d}P(\omega),$$ where $\Omega$ is the sample space and $P$ is the probability measure with respect to which the random variable $X$ is defined. Since $P$ is a nonnegative measure and $(X(\omega) - E(X))^2 \ge 0$ for all $\omega\in \Omega$, it follows that $$ \operatorname{Var}(X) \ge 0. \tag{1}$$

We can then exploit the linearity of the integral (i.e. the linearity of expectation), together with the fact that $E(X)$ is a constant, in order to obtain \begin{align} E([X-E(X)]^2) &= E(X^2 - 2XE(X) + E(X)^2) \\ &= E(X^2) - 2E(X)^2 + E(X)^2 \\ &= E(X^2) - E(X)^2. \tag{2} \end{align} Combining (1) and (2), we get the desired result, namely $$ 0 \le \operatorname{Var}(X) = E([X-E(X)]^2) = E(X^2) - E(X)^2. $$

In point of fact, we really only need the second computation, as $E([X-E(X)]^2)$ is the expectation of a nonnegative random variable, and a nonnegative random variable must have nonnegative expectation. However, the connection to the variance is worth emphasizing (I can't tell you how sad it makes me when I have to mark down exam questions because students compute negative values for the variance, then give me imaginary standard deviations).
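As a quick sanity check (not a substitute for the proof), here is a small Python sketch that evaluates both expressions for a hypothetical discrete distribution — the values and probabilities are made up for illustration — and confirms that $E([X-E(X)]^2)$ and $E(X^2) - E(X)^2$ agree and are nonnegative:

```python
# Hypothetical discrete random variable: values x_i with probabilities p_i.
values = [-2.0, 1.0, 3.0, 7.0]
probs  = [0.1, 0.4, 0.3, 0.2]   # must sum to 1

EX  = sum(p * x for p, x in zip(probs, values))       # E(X)
EX2 = sum(p * x * x for p, x in zip(probs, values))   # E(X^2)

# Definition: Var(X) = E([X - E(X)]^2)
var_def = sum(p * (x - EX) ** 2 for p, x in zip(probs, values))

# Shortcut from (2): Var(X) = E(X^2) - E(X)^2
var_shortcut = EX2 - EX ** 2

print(var_def, var_shortcut)   # the two values coincide
assert abs(var_def - var_shortcut) < 1e-12
assert var_shortcut >= 0       # consistent with (1)
```

A student computing a negative variance has necessarily made an arithmetic slip somewhere: each term $p \cdot (x - E(X))^2$ in the defining sum is a product of nonnegative numbers, so the total cannot be negative.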