Initial note: I'm interested in the combinatorics aspect of the following problem, not in how to prove the relation in general.
The idea is to show the following relationship:
$$ \begin{pmatrix}1&1 \\ 1&1\end{pmatrix}^n = 2^{n-1}{\underbrace{\begin{pmatrix}1&1 \\ 1&1\end{pmatrix}}_{\equiv M}} $$
by decomposing the matrix $M = A + B + C + D$ into the four individual matrices
$$ A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad C = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \qquad D = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} $$
and noting the following relationships for matrix multiplication:
| $\ast$ | $A$ | $B$ | $C$ | $D$ |
|---|---|---|---|---|
| $\bf{A}$ | $A$ | $B$ | $0$ | $0$ |
| $\bf{B}$ | $0$ | $0$ | $A$ | $B$ |
| $\bf{C}$ | $C$ | $D$ | $0$ | $0$ |
| $\bf{D}$ | $0$ | $0$ | $C$ | $D$ |
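Both the decomposition and the multiplication table are easy to check numerically; here is a minimal sketch in Python with NumPy, using the matrix names defined above:

```python
import numpy as np

# The four basis matrices from the decomposition M = A + B + C + D.
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])
C = np.array([[0, 0], [1, 0]])
D = np.array([[0, 0], [0, 1]])
M = A + B + C + D

# Spot-check a few entries of the multiplication table.
assert (A @ A == A).all() and (A @ B == B).all() and (A @ C == 0).all()
assert (B @ C == A).all() and (B @ D == B).all() and (B @ A == 0).all()

# The identity the decomposition is meant to explain: M^n = 2^(n-1) M.
for n in range(1, 11):
    assert (np.linalg.matrix_power(M, n) == 2 ** (n - 1) * M).all()
```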
If these matrix products were commutative, one could use the multinomial theorem in order to compute the coefficients for the resulting polynomial $(A+B+C+D)^n$. Since they are not, one needs to consider all possible combinations for the matrices $A,B,C,D$ (like for this question):
$$ \sum_{\text{combinations}} X_1X_2\dots X_n, \qquad \text{where} \; X_i \in \{A,B,C,D\} $$
However, due to the above relations, each pair of adjacent matrices $X_iX_{i+1}$ can be reduced to either zero or another matrix of the set $\{A,B,C,D\}$ (which means the contribution of the whole expression $X_1X_2\dots X_n$ will reduce to either $0$ or $1$). So I'm wondering whether, by applying this knowledge, it is possible to combine the various permutations for a given term $A^{k_1}B^{k_2}C^{k_3}D^{k_4}$ in order to determine how many nonzero permutations there are. Together with the multinomial coefficients this could then be used to calculate the result.
At this point I'm struggling to find the number of nonzero permutations for a given set of powers $\{k_1,k_2,k_3,k_4 \mid k_i \geq 0\}$ in the expression $A^{k_1}B^{k_2}C^{k_3}D^{k_4}$. Is there a way to determine this?
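For small powers one can at least brute-force this count. A sketch in Python with NumPy (the helper name `nonzero_arrangements` is just for illustration):

```python
from itertools import permutations
import numpy as np

MATS = {'A': np.array([[1, 0], [0, 0]]), 'B': np.array([[0, 1], [0, 0]]),
        'C': np.array([[0, 0], [1, 0]]), 'D': np.array([[0, 0], [0, 1]])}

def nonzero_arrangements(k1, k2, k3, k4):
    """Count the distinct orderings of k1 A's, k2 B's, k3 C's and k4 D's
    whose matrix product is nonzero."""
    letters = 'A' * k1 + 'B' * k2 + 'C' * k3 + 'D' * k4
    count = 0
    for seq in set(permutations(letters)):  # distinct orderings only
        prod = np.eye(2, dtype=int)
        for letter in seq:
            prod = prod @ MATS[letter]
        if prod.any():
            count += 1
    return count
```

For instance, `nonzero_arrangements(1, 1, 1, 1)` counts the orderings of one copy each of $A,B,C,D$ that survive.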

One nitpick: you say that

> the contribution of the whole expression $X_1X_2\dots X_n$ will reduce to either $0$ or $1$.

This is not correct. Instead, the entire expression reduces to either $0$, $A$, $B$, $C$ or $D$.
I think a better term is to say you are summing over all sequences of $A,B,C$ and $D$. Many of these sequences contribute $0$ to the final summation; for example, if $A$ is immediately followed by $C$ anywhere in the sequence, then since $AC=0$, the entire product is $0$.
In fact, for every matrix there are exactly $2$ matrices which can follow it while keeping the entire product nonzero: $A$ can be followed only by $A$ or $B$, $B$ only by $C$ or $D$, $C$ only by $A$ or $B$, and $D$ only by $C$ or $D$. This means that once the first matrix in the sequence is chosen, there are exactly $2^{n-1}$ ways to complete the sequence. With $4$ ways to choose the initial matrix, there are $4\cdot 2^{n-1}$ nonzero products.
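This count is easy to confirm by exhaustive enumeration for small $n$; a quick sketch assuming NumPy:

```python
from itertools import product
import numpy as np

mats = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
        np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

# For each n, multiply out all 4^n sequences and count the nonzero products.
for n in range(1, 7):
    nonzero = 0
    for seq in product(mats, repeat=n):
        p = np.eye(2, dtype=int)
        for X in seq:
            p = p @ X
        if p.any():
            nonzero += 1
    assert nonzero == 4 * 2 ** (n - 1)
```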
Each of these products reduces, using your multiplication table, to a single matrix: $A$, $B$, $C$ or $D$. As long as we can show that each of these results appears equally often, we are done, since then the final result is $$ 2^{n-1}\cdot (A+B+C+D)=2^{n-1}M. $$ This equidistribution is not hard to prove if you look at things in the right way. Let $N_A,N_B,N_C,$ and $N_D$ be the number of nonzero products which evaluate to $A,B,C$ and $D$ respectively.
- We can prove $N_A=N_B$ as follows. Given a sequence which evaluates to $A$, changing its last factor via the rule $$A\leftrightarrow B,\qquad C\leftrightarrow D$$ results in a sequence whose product is $B$ (the product stays nonzero, because $A$ and $B$ may be preceded by exactly the same matrices, namely $A$ and $C$). This operation is a bijection, so $N_A=N_B$.
- The same bijection from the previous bullet proves $N_C=N_D$.
- Finally, applying the analogous substitution $$A\leftrightarrow C,\qquad B\leftrightarrow D$$ to the first element of a nonzero sequence proves $N_A=N_C$ and $N_B=N_D$.
Therefore, all of the nonzero products appear equally, and we are done.
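The whole argument can also be confirmed numerically: tallying which of $A,B,C,D$ each nonzero product reduces to should give $2^{n-1}$ in every bucket. A sketch of that check, assuming NumPy:

```python
from collections import Counter
from itertools import product
import numpy as np

mats = {'A': np.array([[1, 0], [0, 0]]), 'B': np.array([[0, 1], [0, 0]]),
        'C': np.array([[0, 0], [1, 0]]), 'D': np.array([[0, 0], [0, 1]])}

for n in range(1, 7):
    tally = Counter()  # N_A, N_B, N_C, N_D for this n
    for seq in product('ABCD', repeat=n):
        p = np.eye(2, dtype=int)
        for letter in seq:
            p = p @ mats[letter]
        # Every nonzero product equals exactly one of the four basis matrices.
        for name, m in mats.items():
            if (p == m).all():
                tally[name] += 1
    assert all(tally[x] == 2 ** (n - 1) for x in 'ABCD')
```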