I've seen matrices placed inside other matrices in several places, but I cannot make sense of the notation, and searching turned up nothing. See the screenshot below, where two identity matrices are put inside another matrix. What does this matrix look like? Are there literally matrices sitting inside another matrix, or are the inner matrices expanded somehow? Also, the result of multiplying by this special matrix serves as a kind of condition, since the end result is used to define a set.

What does it mean to have a matrix inside a matrix?
1.9k views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 answers below.
I call this "block notation". For example, if $$ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \\ \end{bmatrix} $$ then $\begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}$ is just a concise notation for the matrix $$ \begin{bmatrix} a_{11} & a_{12} & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 & 0 \\ 0 & 0 & b_{11} & b_{12} & b_{13} \\ 0 & 0 & b_{21} & b_{22} & b_{23} \\ 0 & 0 & b_{31} & b_{32} & b_{33} \\ \end{bmatrix}. $$ Personally, I prefer not to call this matrix a "block matrix" because it is just a particular matrix which in this example has 5 rows and 5 columns. I would rather say that the matrix has been written using "block notation".
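As a sanity check on this expansion, here is a small sketch using numpy (the concrete entries of `A` and `B` are made up for illustration): `np.block` takes the nested-list form of the block notation and expands it into an ordinary matrix.

```python
import numpy as np

# Hypothetical 2x2 and 3x3 blocks standing in for A and B in the answer.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6, 7],
              [8, 9, 10],
              [11, 12, 13]])

# np.block expands the block notation [[A, 0], [0, B]] into an ordinary
# 5x5 matrix; the zero blocks must be given matching shapes (2x3 and 3x2).
M = np.block([[A, np.zeros((2, 3))],
              [np.zeros((3, 2)), B]])

print(M.shape)  # (5, 5)
```

Note that the "0" blocks in the notation are really zero *matrices* whose shapes are forced by the surrounding blocks, which is why the code must spell out `np.zeros((2, 3))` and `np.zeros((3, 2))` explicitly.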
Those are $n \times n$ identity matrix blocks. For example, with $n=3$, $$ \left[ \begin{array}{c|c} I_3 & 0 \\ \hline 0 & I_3 \end{array} \right] = \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right], $$ which is exactly the $2n \times 2n$ identity matrix $I_{2n}$ (here $I_6$).
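A quick numerical check of this claim, with numpy and the same $n=3$:

```python
import numpy as np

n = 3
Z = np.zeros((n, n))

# Assemble the block matrix [[I_n, 0], [0, I_n]].
M = np.block([[np.eye(n), Z],
              [Z, np.eye(n)]])

# Identity blocks on the diagonal give exactly the 2n x 2n identity.
print(np.array_equal(M, np.eye(2 * n)))  # True
```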
More generally, suppose that a vector space $V$ is a direct sum of $k$ subspaces of total dimension $n$, i.e., $$ V = V_1 \oplus \cdots \oplus V_k, $$ where the dimension of each summand is $n_i = \dim V_i$, so $$ n = n_1 + \cdots + n_k. $$ Moreover, choose a basis $\mathcal{B}_i$ for each subspace $V_i$, so that $$ \mathcal{B} = \mathcal{B}_1 \cup \cdots \cup \mathcal{B}_k $$ is a basis of $V$. Then, a linear transformation $T:V \to V$ is represented by an $n \times n$ matrix, or equivalently a $k \times k$ block matrix, where the $(i,j)$ block is an $n_i \times n_j$ matrix representing the linear transformation $T_{ij}:V_j \to V_i$.
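To make the indexing concrete, here is a minimal sketch of extracting the $(i,j)$ block of a matrix, assuming $k = 2$ summands with made-up dimensions $n_1 = 2$ and $n_2 = 3$ (the matrix entries are arbitrary placeholders, not a specific $T$):

```python
import numpy as np

# Assumed block sizes n1 = 2, n2 = 3, so n = 5.
n1, n2 = 2, 3
T = np.arange(25).reshape(5, 5)  # placeholder matrix of T in the basis B

# Row/column offsets where each block starts: [0, n1, n1 + n2].
offsets = [0, n1, n1 + n2]

def block(i, j):
    """Return the (i, j) block T_ij, an n_i x n_j submatrix (0-indexed)."""
    return T[offsets[i]:offsets[i + 1], offsets[j]:offsets[j + 1]]

print(block(0, 1).shape)  # (2, 3)
print(block(1, 0).shape)  # (3, 2)
```

As in the answer, the $(i,j)$ block is $n_i \times n_j$: off-diagonal blocks need not be square, and the sizes are dictated entirely by the dimensions of the summands.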
Linear algebra formulas applied to the blocks behave as if the blocks were scalar entries, with one major caveat: the blocks are themselves matrices, so they do not necessarily commute, and the order of factors in each block product must be preserved.
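The caveat about non-commuting blocks can be checked numerically: blockwise multiplication follows the same pattern as the $2 \times 2$ scalar formula, provided each block product keeps its factors in order. A sketch with random blocks (sizes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))
E, F, G, H = (rng.standard_normal((2, 2)) for _ in range(4))

M = np.block([[A, B], [C, D]])
N = np.block([[E, F], [G, H]])

# Blockwise product, mirroring the 2x2 scalar formula but keeping each
# block product in the correct order: the (1,1) block is A@E + B@G,
# not E@A + G@B.
P = np.block([[A @ E + B @ G, A @ F + B @ H],
              [C @ E + D @ G, C @ F + D @ H]])

print(np.allclose(M @ N, P))  # True
```

Swapping the factors inside any block product (e.g. writing `E @ A` in place of `A @ E`) generally breaks the equality, which is exactly the caveat above.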