In Kumagai's book Random Walks on Disordered Media and their Scaling Limits, he says that, given $A \subseteq \mathbb{Z}^{d}$ and a symmetric Markov process $(Y_{n})$ on $\mathbb{Z}^{d}$ with transition matrix $P$, the function $$\varphi(x) = \mathbb{E}^{x}(f(Y_{\sigma_{A}}) \, \mid \, \sigma_{A} < \infty), \quad \sigma_{A} = \inf\{n \in \mathbb{N}_{0} \, \mid \, Y_{n} \in A\},$$ is a solution of the Dirichlet problem $$(P - I)\varphi(x) = 0, \quad x \in A^{c},$$ with boundary data $f : A \to \mathbb{R}$, i.e. $\varphi = f$ on $A$.
In the proof, he simply says $$\varphi(x) = \sum_{y \in \mathbb{Z}^{d}} P(x,y) \varphi(y)$$ "by the Markov property of $Y$."
Is it really that simple? I see that $\sigma_{A}$ is a stopping time, but it seems to me that one still needs to split the term $$\mathbb{E}^{x}(f(Y_{\sigma_{A}}) \, \mid \, \sigma_{A} < \infty, Y_{1} = y)$$ into the cases $y \in A$ and $y \in A^{c}$.
The author did not write $\varphi(x) = \mathbb{E}^{x}(f(Y_{\sigma_{A}}) \mid \sigma_{A} < \infty)$; he wrote $\varphi(x) = \mathbb{E}^{x}(f(Y_{\sigma_{A}}) : \sigma_{A} < \infty)$. This is not a conditional expectation, but an expectation restricted to the event $\{\omega : \sigma_A(\omega) < \infty\}$. We may rewrite the definition as $\varphi(x) = \mathbb{E}^{x}(f(Y_{\sigma_{A}}) \,{\bf 1}_{\{\sigma_{A} < \infty\}})$.
For $x\notin A$, $\sigma_A=1+\sigma_A\circ\theta_1$ holds $\mathbb{P}^x$-almost surely. This is the crucial point. It says that in order to hit $A$ from $x$, the Markov chain has to take at least one step.
This implies that, for $x\notin A$, $Y_{\sigma_A}(\omega)=Y_{\sigma_A}(\theta_1\omega)$ holds $\mathbb{P}^x$-almost surely. The chain's position at time zero has no effect on the hitting place.
Therefore, for $x \notin A$, $$\varphi(x)=\mathbb{E}^{x}\big(f(Y_{\sigma_A}) \,{\bf 1}_{\{\sigma_{A}<\infty\}}\big) =\mathbb{E}^{x}\big([f(Y_{\sigma_A})\circ \theta_1] \,{\bf 1}_{\{\sigma_{A}\circ\theta_1<\infty\}}\big) =\mathbb{E}^{x}\Big(\mathbb{E}^{Y_1}\big[f(Y_{\sigma_A}) \,{\bf 1}_{\{\sigma_{A}<\infty\}}\big]\Big) =\sum_{y\in\mathbb{Z}^d} P(x,y)\, \varphi(y),$$ where the third equality is the Markov property applied at time $1$, and the last one writes out $\mathbb{E}^{x}(\varphi(Y_1))$.
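If it helps, here is a small numerical sanity check of the mean-value property (not from the book; the state space, the set $A$, and the boundary data $f$ are all made up for illustration). To keep everything finite, it uses simple random walk on the cycle $\mathbb{Z}/12\mathbb{Z}$ instead of $\mathbb{Z}^d$, so $\sigma_A < \infty$ almost surely; $\varphi$ is computed by making $A$ absorbing and iterating, and then $\varphi(x) = \sum_y P(x,y)\varphi(y)$ is checked off $A$:

```python
import numpy as np

# A finite stand-in for Z^d: simple random walk on the cycle Z/NZ,
# so sigma_A < infty almost surely and phi is computable.
N = 12
P = np.zeros((N, N))
for x in range(N):
    P[x, (x - 1) % N] = 0.5
    P[x, (x + 1) % N] = 0.5

A = [0, 5]                       # the set being hit (illustrative choice)
f = {0: 1.0, 5: -2.0}            # boundary data on A (illustrative choice)

# phi(x) = E^x[f(Y_{sigma_A}); sigma_A < infty]: make A absorbing and
# iterate P_abs^n g, where g = f on A and 0 elsewhere.  Since g vanishes
# off A, (P_abs^n g)(x) = E^x[f(Y_{sigma_A}); sigma_A <= n] -> phi(x).
P_abs = P.copy()
for a in A:
    P_abs[a] = np.eye(N)[a]
g = np.zeros(N)
for a, v in f.items():
    g[a] = v
phi = g.copy()
for _ in range(5000):
    phi = P_abs @ phi

# Check the mean-value property phi(x) = sum_y P(x,y) phi(y) off A,
# using the ORIGINAL transition matrix P.
off_A = [x for x in range(N) if x not in A]
print(np.allclose((P @ phi)[off_A], phi[off_A]))
```

For this one-dimensional example $\varphi$ interpolates $f$ linearly on each arc between the two points of $A$, which is exactly discrete harmonicity off $A$.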
Such manipulations will seem simple and natural when you've gotten more experience with Markov chains, I promise!