A tensor with two indices can be represented by a $3\times3$ matrix.
\begin{equation} A= \left( \begin{array}{ccc} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{array} \right) \end{equation}
It makes sense that it's a $2$D matrix, but why is it $3\times3$ and not, say, $2\times2$ or $4\times4$?
A rank-$n$ tensor is a linear map from a sequence of vector spaces to the reals (or complexes, but let's keep things simple). So it eats vectors, which normally all live in $d$ dimensions. Therefore a rank-$2$ tensor can be written as a two-index object $a_{ij}$, which acts linearly on $d$-dimensional vectors $v_i, w_j$ as $$ \sum_{i=1}^d \sum_{j=1}^d a_{ij} v_i w_j. $$ But this is exactly the summation performed by the matrix product $v^T A w$, so $A$ is a $d \times d$ matrix. The matrix in the question is $3\times3$ simply because the underlying vector space is $3$-dimensional, as for ordinary vectors in physical space; in a $2$- or $4$-dimensional space the same tensor would indeed be a $2\times2$ or $4\times4$ matrix.
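A quick numerical sketch of this equivalence (using NumPy, with arbitrary example values for $A$, $v$, and $w$ in $d=3$): the explicit double sum and the matrix product $v^T A w$ give the same number.

```python
import numpy as np

# Arbitrary 3x3 two-index tensor and two 3-dimensional vectors (d = 3).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
v = rng.standard_normal(3)
w = rng.standard_normal(3)

# Double sum over both indices: sum_i sum_j A[i, j] * v[i] * w[j]
double_sum = sum(A[i, j] * v[i] * w[j] for i in range(3) for j in range(3))

# The same contraction written as a matrix product v^T A w
matrix_product = v @ A @ w

print(np.isclose(double_sum, matrix_product))  # True
```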