Let $P = [p_{ij}]$ be an $n\times n$ transition matrix for an $n$-state Markov chain. How do you prove that $P^2$, or better yet, that $P^n$ is again a transition matrix?
My approach leaves me stuck at a certain point. Note that $$\sum_{i=1}^{n}p_{ij}=1,$$ i.e., the sum of the entries in each column of the transition matrix $P$ equals $1$. Now, moving to $P^2$ (or to $P^n$, if somebody knows how): $$\left(P^2 \right)_{ij}= \sum_{k=1}^{n}p_{ik}p_{kj}.$$ This is the $(i,j)$ entry of the new $n \times n$ matrix $P^2$. The sum of the entries in column $j$ of $P^2$ is then $$\sum_{i=1}^{n}\left(\sum_{k=1}^{n}p_{ik}p_{kj}\right),$$ and I need to show that this equals $1$ in order to prove that $P^2$ is again a transition matrix. That is, I have to demonstrate that $$\sum_{i=1}^{n}\left(\sum_{k=1}^{n}p_{ik}p_{kj}\right) = 1.$$ Please help!
Edit: Is it possible to argue as follows? Because the order of finite sums can be interchanged, I can write the above expression as $$\sum_{k=1}^{n}\left(\sum_{i=1}^{n}p_{ik}p_{kj}\right),$$ and since $p_{kj}$ does not depend on the inner summation index $i$, i.e. it is a "constant" with respect to that sum, I can factor it out: $$\sum_{k=1}^{n}p_{kj}\left(\sum_{i=1}^{n}p_{ik}\right).$$ Now $\sum_{i=1}^{n}p_{ik}$ is the sum of the entries in column $k$ of $P$, so it equals $1$ because $P$ is a transition matrix by definition; what remains is $\sum_{k=1}^{n}p_{kj}$, the sum of the entries in column $j$ of $P$, which equals $1$ for the same reason. Thus: $$\sum_{i=1}^{n}\left(\sum_{k=1}^{n}p_{ik}p_{kj}\right) = \sum_{k=1}^{n}p_{kj}\left(\sum_{i=1}^{n}p_{ik}\right) = 1\cdot1 = 1$$
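As a sanity check (not a substitute for the proof), one can verify numerically that the columns of $P^2$ sum to $1$ for a randomly generated column-stochastic matrix. This is a minimal sketch in plain Python; the helper names are my own, and the matrix product is written out exactly as the double sum above.

```python
import random

def random_left_stochastic(n):
    """Build an n x n matrix whose columns each sum to 1."""
    cols = []
    for _ in range(n):
        raw = [random.random() for _ in range(n)]
        s = sum(raw)
        cols.append([x / s for x in raw])
    # Arrange so that entry [i][j] is the i-th entry of column j.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def matmul(A, B):
    """(AB)_{ij} = sum_k A_{ik} B_{kj}."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
P = random_left_stochastic(n)
P2 = matmul(P, P)
col_sums = [sum(P2[i][j] for i in range(n)) for j in range(n)]
print(col_sums)  # each entry is 1 up to floating-point rounding
```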
Let $e=(1,\ldots,1)^T$. If, say, a matrix $A$ has row sums equal to one then equivalently $Ae=e$. If, say, a matrix $B$ has column sums equal to one then equivalently $e^TB=e^T$. This is certainly better than getting lost in a lot of sums. Also, it tells you that $e$ is a right eigenvector of $A$ and a left eigenvector of $B$.
If $P_1$ and $P_2$ are right stochastic matrices, that is, $P_1e=e$ and $P_2e=e$, then $(P_2P_1)e=P_2(P_1e)=P_2e=e$.
If $P_1$ and $P_2$ are left stochastic matrices, that is, $e^TP_1=e^T$ and $e^TP_2=e^T$, then $e^T(P_2P_1)=(e^TP_2)P_1=e^TP_1=e^T$.
You can use this to prove analogous results for any sequence of right/left stochastic matrices and their powers. This is related to the fact that if a set of matrices share a common right/left eigenvector, this vector is a right/left eigenvector of their product (in any order).
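To illustrate the left-stochastic identity numerically (a sketch only — the two matrices below are hand-picked examples, and the helper names are made up): $e^T B$ is just the vector of column sums of $B$, so checking $e^T(P_2P_1)=e^T$ and $e^TP_1^{\,5}=e^T$ amounts to checking that column sums stay equal to $1$.

```python
def matmul(A, B):
    """(AB)_{ij} = sum_k A_{ik} B_{kj}."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def e_transpose_times(B):
    """Return e^T B, i.e. the vector of column sums of B."""
    n = len(B)
    return [sum(B[i][j] for i in range(n)) for j in range(n)]

# Two left (column-)stochastic matrices: each column sums to 1.
P1 = [[0.5, 0.2], [0.5, 0.8]]
P2 = [[0.1, 0.7], [0.9, 0.3]]

product = matmul(P2, P1)
print(e_transpose_times(product))  # column sums of P2 P1: each is 1 up to rounding

# Powers: e^T P1^5 is still e^T.
power = P1
for _ in range(4):
    power = matmul(power, P1)
print(e_transpose_times(power))
```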