I know that the outer products of every pair of eigenvectors form a basis for the space of matrices. For example, when we write a matrix in terms of its eigenvectors, we have:
$$ X = \sum_{i,j} \lambda_{i,j}u_iu_j^T $$
where $\lambda_{i,j}$ is equal to zero when $i\neq j$ and is the corresponding eigenvalue otherwise. But what is the intuition behind this basis? Why, in the eigendecomposition, are the coefficients of the cross terms (outer products of distinct eigenvectors) zero?
Before I launch into an explanation, I want to make a correction to your question. The main issue is that the matrix $X$ needs to be diagonalizable for its eigenvectors to form a basis. Let $V$ be the vector space on which $X$ acts, and let $M(V)$ be the vector space of matrices acting on $V$.
If $X$ is diagonalizable, then, using its eigenvectors as a basis, it is just diagonal. We have $$ X = \begin{bmatrix} \lambda_{1,1} & & & \\ & \lambda_{2,2} & & \\ & & \ddots & \\ & & & \lambda_{n,n} \\ \end{bmatrix} $$ where our basis consists of eigenvectors $$ u_i = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} $$ Using matrix multiplication, we can verify that these are eigenvectors: $X u_i = \lambda_{i,i} u_i$.
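This is easy to check numerically. A minimal NumPy sketch (the $3 \times 3$ size and the eigenvalues $2, 5, 7$ are my own illustration, not from the question):

```python
import numpy as np

# Hypothetical example: a 3x3 diagonal X with eigenvalues 2, 5, 7.
lams = np.array([2.0, 5.0, 7.0])
X = np.diag(lams)

# In the eigenbasis, the eigenvectors u_i are the standard basis vectors.
for i in range(3):
    u_i = np.zeros(3)
    u_i[i] = 1.0
    # Check X u_i = lambda_{i,i} u_i
    assert np.allclose(X @ u_i, lams[i] * u_i)
```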
Then again using matrix multiplication we have $$ u_i u_j^T = A_{i, j} := \begin{array}{c c} & \begin{array}{c c c c c} & & j & & \end{array} \\ \begin{array}{c} \vphantom{0} \\ \vphantom{\ddots} \\ i \\ \vphantom{\ddots} \\ \vphantom{0} \end{array} & \left[ \begin{array}{c c c c c} 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & 1 & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0 \end{array} \right] \end{array} $$ If we'd rather not lean on matrix multiplication rules for this result, we can check directly that the product $u_i u_j^T$ produces the matrix $A_{i,j}$ for which $$ A_{i,j} u_k = u_i u_j^T u_k = u_i (u_j \cdot u_k) = \begin{cases} 0 & k \ne j \\ u_i & k = j \end{cases} $$
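Both facts can be spot-checked with `np.outer` (the size $n = 4$ and indices $i = 1$, $j = 2$ below are arbitrary choices for illustration):

```python
import numpy as np

n = 4
i, j = 1, 2  # arbitrary indices for illustration
u = np.eye(n)  # columns are the standard basis vectors

# Outer product u_i u_j^T: a single 1 in position (i, j), zeros elsewhere.
A_ij = np.outer(u[:, i], u[:, j])
assert A_ij[i, j] == 1.0 and A_ij.sum() == 1.0

# A_{i,j} u_k equals u_i when k = j, and 0 otherwise.
for k in range(n):
    expected = u[:, i] if k == j else np.zeros(n)
    assert np.allclose(A_ij @ u[:, k], expected)
```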
Then viewing matrices as vectors with $n^2$ components, $A_{i,j}$ forms a basis of $M(V)$ because each pair $(i,j)$ refers to a specific component. Writing $X$ in terms of this basis, we have $$ X = \sum_{i,j} \mu_{i,j} A_{i, j}. $$ And since $X$ is a diagonal matrix, all of its components are $0$ off the diagonal—exactly when $i \ne j$. So we can say $\mu_{i, j} = 0$ for all $i \ne j$. And if $i = j$, then we can take $\mu_{i, i}$ to be exactly the eigenvalue $\lambda_{i,i}$, since that is the coefficient in the $(i, i)$-position of the matrix. So we have the result, and it is all a result of using the correct basis.
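Putting the two previous observations together, we can reconstruct a diagonal $X$ from the outer-product basis using only the diagonal coefficients. A sketch, reusing the illustrative eigenvalues $2, 5, 7$ from above:

```python
import numpy as np

# X = sum_{i,j} mu_{i,j} A_{i,j}, with mu_{i,j} = 0 off the diagonal
# and mu_{i,i} = lambda_{i,i} on it.
lams = np.array([2.0, 5.0, 7.0])
X = np.diag(lams)
u = np.eye(3)

# Only the diagonal terms contribute, and they recover X exactly.
X_rebuilt = sum(lams[i] * np.outer(u[:, i], u[:, i]) for i in range(3))
assert np.allclose(X, X_rebuilt)
```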
So why do we still have a basis of $M(V)$ of the form $\{u_i u_j^T\}$ when $X$ is written with respect to an arbitrary basis? Suppose as above that $X$ is diagonal, but write $Y = PXP^{-1}$ for an arbitrary invertible matrix $P$. Any diagonalizable $Y$ can be written in this way. Then let $v_i = Pu_i$. It will be an eigenvector for $Y$ with the same eigenvalue, $\lambda_{i,i}$: $$ Y v_i = (PXP^{-1})(Pu_i) = PXu_i = P\lambda_{i,i}u_i = \lambda_{i,i}v_i $$
Since $P$ is invertible, $\{v_i\}$ is a basis for $V$. And we have $$ v_i v_j^T = Pu_i(Pu_j)^T = Pu_iu_j^TP^T = P A_{i,j} P^T. $$
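Both identities — that $v_i = Pu_i$ is an eigenvector of $Y$ and that $v_i v_j^T = P A_{i,j} P^T$ — can be verified numerically. A sketch with a random invertible $P$ (a random real matrix is invertible with probability 1, but we assert it anyway):

```python
import numpy as np

rng = np.random.default_rng(0)
lams = np.array([2.0, 5.0, 7.0])
X = np.diag(lams)
u = np.eye(3)

# An arbitrary invertible P.
P = rng.standard_normal((3, 3))
assert abs(np.linalg.det(P)) > 1e-9

# Y = P X P^{-1} has eigenvectors v_i = P u_i with the same eigenvalues.
Y = P @ X @ np.linalg.inv(P)
for i in range(3):
    v_i = P @ u[:, i]
    assert np.allclose(Y @ v_i, lams[i] * v_i)

# v_i v_j^T = P A_{i,j} P^T for a sample pair (i, j).
i, j = 0, 2
A_ij = np.outer(u[:, i], u[:, j])
assert np.allclose(np.outer(P @ u[:, i], P @ u[:, j]), P @ A_ij @ P.T)
```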
Both the multiplication on the left by $P$ and the multiplication on the right by $P^T$ preserve the property of $\{A_{i,j}\}$ being a basis for $M(V)$, because they are actually invertible linear transformations on $M(V)$. Indeed, $P(\lambda A + \mu B) = \lambda PA + \mu PB$, and $(\lambda A + \mu B)P^T = \lambda AP^T + \mu BP^T$, so both multiplications are linear transformations. And invertibility of $P$ (hence invertibility of $P^T$) implies that $PA \ne 0$ (and $AP^T \ne 0$) for all $A \ne 0$, so neither multiplication has a nontrivial kernel. They preserve dimension, so they are thus invertible. Therefore $\{A_{i,j}\}$ is a basis for $M(V)$ if and only if $\{P A_{i,j} P^T\} = \{v_i v_j^T\}$ is.
And as a final note, if $X$ is not diagonalizable, then there is no basis of eigenvectors. There are simply not enough linearly independent eigenvectors for their outer products to form a basis of $M(V)$, so the whole thing falls apart.
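The classic example of this failure is a Jordan block. The sketch below (my own illustration) shows that $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ has eigenvalue $1$ with algebraic multiplicity $2$ but only a one-dimensional eigenspace, so no basis of eigenvectors exists:

```python
import numpy as np

# A 2x2 Jordan block: eigenvalue 1 repeated, but only one independent
# eigenvector, namely (1, 0)^T.
J = np.array([[1.0, 1.0], [0.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(J)

# Both computed eigenvector columns point in the same direction (up to
# sign and scale), so the matrix of eigenvectors is singular.
assert abs(np.linalg.det(eigvecs)) < 1e-6
```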