I am familiar with the expression for the eigendecomposition of a matrix $$ A = Q\Lambda Q^{-1} $$ where $Q$ is a matrix whose columns are the eigenvectors of $A$ and $\Lambda$ is a diagonal matrix with the $i^{\text{th}}$ diagonal element being the eigenvalue corresponding to the eigenvector in the $i^{\text{th}}$ column of $Q$.
I have also seen the eigendecomposition expressed this way: $$ A = \sum\limits_{i}\lambda_i\mathbf{x_i}\mathbf{x_i}^T $$ where each vector $\mathbf{x_i}$ is the $i^{\text{th}}$ eigenvector. However, I am having some difficulty visualizing why this summation is equivalent to the matrix multiplication expression above.
The two expressions only correspond when $Q$ is orthogonal, i.e. $Q^{-1} = Q^\top$. (By the spectral theorem, such an orthogonal $Q$ always exists when $A$ is symmetric.)
Then, with $\mathbf{x}_i$ the $i$th column of $Q$, the two expressions are equal. To see this, write $\Lambda$ as a sum of $n$ diagonal matrices, each with a single nonzero entry $\lambda_i$ in the $i$th diagonal position: $$ \Lambda = \sum_i \lambda_i \mathbf{e}_i\mathbf{e}_i^\top, $$ where $\mathbf{e}_i$ is the $i$th standard basis vector. Substituting and using $Q\mathbf{e}_i = \mathbf{x}_i$ gives $$ A = Q\Lambda Q^\top = \sum_i \lambda_i (Q\mathbf{e}_i)(Q\mathbf{e}_i)^\top = \sum_i \lambda_i \mathbf{x}_i\mathbf{x}_i^\top. $$
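If it helps, here is a quick numerical sanity check of the equivalence, sketched with numpy (the random seed and matrix size are arbitrary choices for the demo):

```python
import numpy as np

# Build a random symmetric matrix, so an orthonormal eigenbasis exists.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# For symmetric input, eigh returns real eigenvalues and an orthogonal Q
# whose columns are the eigenvectors.
eigvals, Q = np.linalg.eigh(A)
Lam = np.diag(eigvals)

# Matrix form: A = Q Lambda Q^T  (Q^{-1} = Q^T since Q is orthogonal).
A_matrix = Q @ Lam @ Q.T

# Sum form: A = sum_i lambda_i x_i x_i^T, with x_i the i-th column of Q.
A_sum = sum(lam * np.outer(Q[:, i], Q[:, i]) for i, lam in enumerate(eigvals))

print(np.allclose(A, A_matrix))  # True
print(np.allclose(A, A_sum))     # True
```

Both reconstructions agree with $A$ up to floating-point tolerance, which is exactly the correspondence derived above.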