I quote the Wikipedia page on covariance matrices:
Throughout this article, boldfaced unsubscripted $\textbf{X}$ and $\textbf{Y}$ are used to refer to random vectors, and unboldfaced subscripted $X_i$ and $Y_i$ are used to refer to random scalars.
If the entries in the column vector $$\textbf{X} = (X_1, \ldots, X_n)$$ are random variables, each with finite variance, then the covariance matrix $C$ is the matrix whose $(i, j)$ entry is the covariance $$C_{ij}=\text{cov}(X_i, X_j)=\text{E}[(X_i-\mu_i)(X_j-\mu_j)],$$ where $\mu_i$ is the expected value of the $i$th entry in the vector $\textbf{X}$.
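To make the definition concrete, here is a minimal NumPy sketch (with synthetic data of my own choosing) that estimates $C$ entrywise from repeated observations and checks it against `np.cov`:

```python
import numpy as np

# Synthetic example: n = 3 random scalars observed over 1000 trials.
rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 3))  # rows are trials, columns are X_1..X_3

# C_ij = E[(X_i - mu_i)(X_j - mu_j)], estimated by averaging over the trials.
mu = samples.mean(axis=0)
centered = samples - mu
C = centered.T @ centered / len(samples)

# np.cov implements the same definition; bias=True selects the 1/n normalization.
assert np.allclose(C, np.cov(samples.T, bias=True))
```

Note that the expectation is estimated by averaging over many realizations of $\textbf{X}$, not from a single measured vector.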
If we consider $\textbf{X}$ as a vector of measured values, such that we know $X_1, X_2, \ldots$ with some inevitable uncertainty, how can we express the covariance of the data? Since we know each $X_i$, it seems that $$\mu_i=E(X_i)=X_i,$$ which would imply that $C_{ij}=0 \ \forall\, i,j$.
Do I have some naive misunderstanding of covariance matrices?