Suppose we have a Markov chain with state space $S=\{a,b,c,d,e,f,g\}$ and transition matrix
$ P=\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1/6 & 1/6 & 0 & 1/6 & 1/2 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 0 & 0 & 1/3 & 1/3 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} $
The stationary distribution is given by
$$\pi P=\pi$$
For our case we have
$$(x_1,\ldots,x_7)\cdot P=(x_1,\ldots,x_7)$$
with $\sum_{i=1}^7 x_i=1 \quad \text{(1.0)}$
That means we have a large system of linear equations together with the constraint (1.0), which is tedious to solve by hand. Is there an easier way to do this?
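For reference, the system can also be solved numerically. Here is a sketch using NumPy: since the equations $\pi(P-I)=0$ are linearly dependent for an irreducible chain, one of them can be replaced by the normalization constraint (1.0), giving a square nonsingular system.

```python
import numpy as np

# Transition matrix for the states (a, b, c, d, e, f, g)
P = np.array([
    [0,   0,   1,   0,   0,   0,   0  ],
    [0,   0,   1,   0,   0,   0,   0  ],
    [1/6, 1/6, 0,   1/6, 1/2, 0,   0  ],
    [0,   0,   1,   0,   0,   0,   0  ],
    [0,   0,   1/3, 0,   0,   1/3, 1/3],
    [0,   0,   0,   0,   1,   0,   0  ],
    [0,   0,   0,   0,   1,   0,   0  ],
])

# pi P = pi is equivalent to (P^T - I) pi^T = 0; replace the last
# (redundant) equation with the normalization sum(pi) = 1.
A = P.T - np.eye(7)
A[-1, :] = 1
b = np.zeros(7)
b[-1] = 1

pi = np.linalg.solve(A, b)
print(pi)  # the stationary distribution
```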

This Markov chain is reversible, as is any finite Markov chain whose underlying graph is a tree; like all reversible finite Markov chains, it can be viewed as a random walk on a weighted graph.
If you give the edges $\{e,c\}$, $\{e,f\}$, $\{e,g\}$ each weight $3$, and the other edges weight $1$, then at each step the walker chooses among the neighbors of its current state with probability proportional to the weights of the incident edges.
For a random walk on a weighted graph, the stationary probability of a vertex is proportional to the sum of the weights of the edges incident to it (loops count only once, but there are none in this example). Here we get $$\frac{1}{24}(1,1,6,1,9,3,3).$$ Note that $24$ is twice the sum of the edge weights.
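The weighted-graph construction above can be checked in a few lines: build the weighted adjacency matrix, normalize its rows to recover the transition matrix from the question, and read off the stationary distribution from the weighted degrees. (This is a sketch; the edge weights are exactly those stated above.)

```python
import numpy as np

states = "abcdefg"
idx = {s: i for i, s in enumerate(states)}

# Edge weights: {e,c}, {e,f}, {e,g} get weight 3, the rest weight 1.
weights = {("a", "c"): 1, ("b", "c"): 1, ("c", "d"): 1,
           ("c", "e"): 3, ("e", "f"): 3, ("e", "g"): 3}

W = np.zeros((7, 7))
for (u, v), w in weights.items():
    W[idx[u], idx[v]] = W[idx[v], idx[u]] = w

# Normalizing each row of W recovers the transition matrix P.
P = W / W.sum(axis=1, keepdims=True)

# Stationary distribution is proportional to the weighted degrees.
deg = W.sum(axis=1)
pi = deg / deg.sum()

print(pi * 24)                  # → [1. 1. 6. 1. 9. 3. 3.]
print(np.allclose(pi @ P, pi))  # → True
```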
Page 63 of Aldous-Fill writes this down in general (as, I'm sure, does your textbook).