I'm reading a tutorial on stochastic processes. There is an example in the tutorial as follows:
A general moving average random process is given as $X[n]=\frac{U[n]+U[n-1]}{2}$, where $E[U[n]]=\mu$, $var(U[n]) = \sigma^2_U$, and the $U[n]$'s are uncorrelated.
As you can see, $X[n]$ is a 1-D random variable. The example is then solved in the following way:
$[C_X]_{ij}=E[(X[i]-E[X[i]])(X[j]-E[X[j]])]\qquad i=0,1,\dots,N-1;j=0,1,\dots,N-1.$
$\begin{align} X[n]-E[X[n]]&=\frac{1}{2}(U[n]+U[n-1])-\frac{1}{2}(\mu+\mu)\\ &=\frac{1}{2}[(U[n]-\mu)+(U[n-1]-\mu)]\\ &=\frac{1}{2}[\overline U[n]+\overline U[n-1]] \end{align}$
$\begin{align} [C_X]_{ij}&=\frac{1}{4}E[(\overline U[i]+\overline U[i-1])(\overline U[j]+\overline U[j-1])]\\ &=\frac{1}{4}(E[\overline U[i]\overline U[j]]+E[\overline U[i]\overline U[j-1]]+E[\overline U[i-1]\overline U[j]]+E[\overline U[i-1]\overline U[j-1]]) \end{align}$
$[C_X]_{ij}=\frac{1}{4}(\sigma^2_U\delta[j-i]+\sigma^2_U\delta[j-i-1]+\sigma^2_U\delta[j-i+1]+\sigma^2_U\delta[j-i]).$
$C_X=\begin{bmatrix}\frac{\sigma^2_U}{2}&\frac{\sigma^2_U}{4}&0&0&\dots & 0&0&0\\ \frac{\sigma^2_U}{4}&\frac{\sigma^2_U}{2}&\frac{\sigma^2_U}{4}&0&\dots &0&0&0\\ \vdots &\vdots &\vdots& \vdots&\ddots &\vdots &\vdots& \vdots\\ 0&0&0&0&\cdots &\frac{\sigma^2_U}{4}&\frac{\sigma^2_U}{2}&\frac{\sigma^2_U}{4}\\ 0&0&0&0&\cdots &0&\frac{\sigma^2_U}{4}&\frac{\sigma^2_U}{2}\end{bmatrix}$

(The $(0,1)$ entry is $\frac{\sigma^2_U}{4}$, consistent with the formula above, which gives $[C_X]_{ij}=\frac{\sigma^2_U}{4}$ whenever $|j-i|=1$.)
So why is $C_X$ an $N\times N$ matrix, in spite of $X[n]$ being 1-dimensional?
First, your random process $X[n],\ n \in \mathbb{N}$, is a collection of infinitely many random variables $X[1],X[2],\ldots$; if we take finitely many of them, say $X[1],\ldots,X[n]$, and form the random vector $$X=(X[1],\ldots,X[n]),$$ this random vector is sometimes called a finite-dimensional section of the random process $X[n],\ n \in \mathbb{N}$.
With every random vector $Y=(Y_1,\ldots,Y_n)$ we can associate its covariance matrix $C_Y=\{c_{ij}\}_{i,j \in \{1,\ldots,n\}}$ with entries $$c_{ij}=Cov(Y_i,Y_j)=E[(Y_i-E[Y_i])(Y_j-E[Y_j])].$$ In your tutorial, $C_X$ is the covariance matrix of the finite-dimensional section $X$ of the random process $X[n],\ n \in \mathbb{N}$, introduced in the previous paragraph.
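To see this concretely, here is a quick numerical sketch (my own illustration, not from the tutorial, with assumed values for $\mu$ and $\sigma_U$): it draws many realizations of the finite-dimensional section $(X[0],\ldots,X[N-1])$ of the moving average process and checks the sample covariance matrix against the tridiagonal $C_X$ derived above.

```python
import numpy as np

# Monte Carlo check of the covariance matrix of the finite-dimensional
# section (X[0], ..., X[N-1]) of X[n] = (U[n] + U[n-1]) / 2.
rng = np.random.default_rng(0)
N, trials = 5, 200_000
mu, sigma = 1.0, 2.0  # assumed E[U[n]] and sqrt(var(U[n])), for illustration

# Each row holds U[-1], U[0], ..., U[N-1] for one realization.
U = rng.normal(mu, sigma, size=(trials, N + 1))
X = 0.5 * (U[:, 1:] + U[:, :-1])          # X[n] = (U[n] + U[n-1]) / 2

C_est = np.cov(X, rowvar=False)           # N x N sample covariance matrix

# Theoretical C_X: sigma^2/2 on the diagonal, sigma^2/4 on the off-diagonals.
C_theory = (sigma**2 / 2) * np.eye(N) \
         + (sigma**2 / 4) * (np.eye(N, k=1) + np.eye(N, k=-1))

print(np.round(C_est, 2))
```

The estimate converges to the tridiagonal matrix as the number of trials grows, which matches the $\delta[j-i]$ terms in the derivation.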
When $Y$ is not a random vector but a proper stochastic process, in place of a covariance matrix we use the auto-covariance function $$C_Y(t_1,t_2)=Cov(Y_{t_1},Y_{t_2})=E[(Y_{t_1}-E[Y_{t_1}])(Y_{t_2}-E[Y_{t_2}])].$$ Of course, when the process $Y$ is WSS, the auto-covariance function $C_Y(t_1,t_2)$ depends on $(t_1,t_2)$ only through the lag $h=t_2-t_1$, and we can write it as $$C_Y(h)=Cov(Y_0,Y_h).$$
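The lag-only dependence can also be checked numerically. A minimal sketch (again with an assumed $\sigma_U$): for the moving average process the auto-covariance should come out as $C_X(0)=\sigma^2_U/2$, $C_X(\pm 1)=\sigma^2_U/4$, and $0$ for larger lags.

```python
import numpy as np

# Sample auto-covariance of X[n] = (U[n] + U[n-1]) / 2 as a function of lag h.
rng = np.random.default_rng(1)
sigma, n = 2.0, 1_000_000                 # assumed sigma_U, for illustration
U = rng.normal(0.0, sigma, size=n + 1)
X = 0.5 * (U[1:] + U[:-1])

def autocov(x, h):
    """Sample auto-covariance at nonnegative lag h."""
    xc = x - x.mean()
    return float(np.mean(xc[:xc.size - h] * xc[h:]))

for h in range(4):
    print(h, round(autocov(X, h), 3))
```

Because a single long realization suffices here, this also illustrates why the WSS form $C_Y(h)$ is so convenient: one function of one argument replaces the whole two-argument $C_Y(t_1,t_2)$.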