I'm reading this paper which contains the following theorem.
Theorem 2. The optimal weighted average of a collection of signals $s=\left(s_1,\ldots,s_M\right)$ with covariance matrix $\Sigma$ is the solution to the following system of linear equations: $$\begin{pmatrix}\Sigma+\delta\delta'&u\\u'&0\end{pmatrix}\begin{pmatrix}w\\\lambda\end{pmatrix}=\begin{pmatrix}\sigma_{s,V}\\1\end{pmatrix}$$ where $u$ is an $M\times1$ vector of ones, $\delta$ is the $M\times1$ vector of signal biases, $\sigma_{s,V}$ is the $M\times1$ vector of covariances between each signal and the criterion variable $V$, $w$ is the $M\times1$ vector of weights constrained to sum to 1, and $\lambda$ is a real-valued unknown variable. By solving the above system for $w$, we obtain the optimal set of weights $w^*$.
Looking at the right-hand side, I understand that there is a matrix involving $\sigma_{s,V}$ and $1$. However, it seems it can't be a $2\times1$ matrix, because $\sigma_{s,V}$ refers to a vector of covariances. Say there are 6 covariances in the vector $\sigma_{s,V}$: how can a vector of 6 covariances sit in the top row of the matrix, with a single 1 in the bottom row?
Analogously, I wonder what the $0$ in the matrix on the left-hand side means.
You need to know about block matrix multiplication here. That matrix equation is convenient shorthand for the pair of equations $$(\Sigma + \delta\delta')w + u\lambda = \sigma_{s,V} \\ u'w = 1,$$ where I dropped the term $0\cdot\lambda$ from the second equation, since it is zero. Both sides are partitioned into blocks: the left-hand matrix is $(M+1)\times(M+1)$, built from the $M\times M$ block $\Sigma+\delta\delta'$, the $M\times1$ block $u$, the $1\times M$ block $u'$, and the scalar $0$; likewise, the right-hand side stacks the $M\times1$ vector $\sigma_{s,V}$ on top of the scalar $1$, giving an $(M+1)\times1$ vector. Also note that the prime ${}'$ denotes the transpose of a matrix. Thus, for example, $\delta$ is $M \times 1$ and $\delta'$ is $1 \times M$, so the product $\delta \delta'$ is $M \times M$.
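To make the block structure concrete, here is a small NumPy sketch that assembles the $(M+1)\times(M+1)$ system and solves it. All the numerical values ($\Sigma$, $\delta$, $\sigma_{s,V}$) are made up for illustration; only the block layout follows the theorem.

```python
import numpy as np

# Hypothetical example with M = 3 signals; the numbers are made up.
M = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M))
Sigma = A @ A.T + M * np.eye(M)        # a positive-definite covariance matrix
delta = np.array([0.1, -0.2, 0.05])    # signal biases (illustrative)
sigma_sV = np.array([0.5, 0.3, 0.4])   # covariances with the criterion V (illustrative)
u = np.ones(M)

# Assemble the (M+1) x (M+1) block matrix and the (M+1) x 1 right-hand side.
K = np.zeros((M + 1, M + 1))
K[:M, :M] = Sigma + np.outer(delta, delta)  # top-left M x M block
K[:M, M] = u                                # top-right M x 1 block
K[M, :M] = u                                # bottom-left 1 x M block (u')
# K[M, M] stays 0 -- the scalar zero in the bottom-right corner.
b = np.concatenate([sigma_sV, [1.0]])

# Solve for the stacked unknown (w, lambda).
sol = np.linalg.solve(K, b)
w, lam = sol[:M], sol[M]
print(w, w.sum())  # the weights sum to 1 by construction
```

Solving the full system and then reading off the first $M$ entries recovers exactly the two equations above: the last row of `K` enforces $u'w = 1$, and the first $M$ rows enforce $(\Sigma+\delta\delta')w + u\lambda = \sigma_{s,V}$.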