Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $\{X_n: \Omega \rightarrow S\}$ a stochastic process, where $S$ is countable. The process is a Markov chain if there exists a transition probability $p$ such that $$\mathbb P[\{X_{n+1} = y\}\mid\sigma(X_0,\cdots, X_n)] = p(X_n, y) \quad \text{a.s.}$$
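For concreteness, here is a small sketch (all numbers hypothetical, $S$ truncated to two states) of a transition probability as a row-stochastic matrix, and of how a distribution for $X_n$ evolves one step under it:

```python
import numpy as np

# Hypothetical chain on S = {0, 1}: p[x, y] = p(x, y).
# A transition probability must have each row summing to 1.
p = np.array([[0.9, 0.1],
              [0.4, 0.6]])
assert np.allclose(p.sum(axis=1), 1.0)

# One step: if X_n has distribution mu_n (a row vector), then
# P(X_{n+1} = y) = sum_x P(X_n = x) p(x, y), i.e. mu_{n+1} = mu_n @ p.
mu0 = np.array([1.0, 0.0])   # Dirac mass at state 0
mu1 = mu0 @ p                # distribution of X_1 when the chain starts at 0
```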
We define a new probability space $(S^\mathbb{N}, \otimes_{n\in \mathbb{N}}\mathcal{S}, \tilde{\mathbb{P}})$ by applying the Kolmogorov extension theorem to the consistent family of joint distributions $\mu_{(X_0, \cdots, X_k)}$, $k \in \mathbb{N}$. Also define $\tilde X_n$ to be the $n$-th coordinate projection on $S^\mathbb{N}$, i.e. $\tilde X_n(\omega) = \omega_n$ for $\omega \in S^\mathbb{N}$.
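The coordinate projections are trivial to write down; a sketch (with a point of $S^\mathbb{N}$ truncated to a finite tuple for illustration):

```python
# Coordinate random variables on the sequence space S^N:
# tilde_X(n) maps a path omega = (omega_0, omega_1, ...) to omega_n.
def tilde_X(n):
    return lambda omega: omega[n]

omega = (0, 1, 1, 0)   # a (truncated) point of S^N with S = {0, 1}
```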
(It seems that $\mathbb{P}$, $\tilde{\mathbb{P}}$ and $X_n, \tilde X_n$ are often used interchangeably.)
Now suppose $X_0$ has an initial distribution which is a Dirac measure at the point $x\in S$. What is the exact definition of $\mathbb{P}_x$ (which is really $\tilde{\mathbb{P}}_x$)?
Durrett only says $\mathbb{P}_x$ is a basic object, since in general, for measurable $A \subset S^{\mathbb{N}}$, $$\mathbb{P}_\mu(A)= \int_S \mathbb{P}_x(A)\, d\mu(x).$$ Furthermore, following Durrett's notation for the Markov property, it should really be $\tilde{\mathbb{P}}_x$ here instead of ${\mathbb{P}}_x$.
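The mixture identity can be checked numerically when $A$ is a cylinder event on a finite chain; a hedged sketch (the transition matrix, initial distribution, and event are all illustrative):

```python
import numpy as np

p = np.array([[0.9, 0.1],    # hypothetical transition matrix on S = {0, 1}
              [0.4, 0.6]])

def P_cyl(mu0, states):
    """P_{mu0}(X_0 = s_0, ..., X_n = s_n) for a cylinder event."""
    prob = mu0[states[0]]
    for a, b in zip(states, states[1:]):
        prob *= p[a, b]
    return prob

A = (0, 1, 1)                # the cylinder event {X_0 = 0, X_1 = 1, X_2 = 1}
mu = np.array([0.3, 0.7])    # an initial distribution on S

# P_mu(A) = sum_x mu(x) P_x(A), where P_x uses the Dirac mass delta_x.
lhs = P_cyl(mu, A)
rhs = sum(mu[x] * P_cyl(np.eye(2)[x], A) for x in range(2))
```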
Given just a r.v. $X_0:\Omega \rightarrow S$ with some distribution $\mu$, and an integrable r.v. $H: \Omega \rightarrow \mathbb{R}$, I know the definition of $$\mathbb{E}[H|X_0 = x] = g(x),$$ where $g$ is a $\mu$-a.e. defined function determined by $$\mathbb{E}[H|\sigma(X_0)] = g(X_0).$$ Now for a measurable $A\subset S^\mathbb{N}$, I don't think we can say $$\tilde{\mathbb{P}}_x(A) = \tilde{\mathbb{E}}[1_A | \tilde X_0 = x],$$ because $\tilde{\mathbb{P}}_x$ should be defined for every $x$, regardless of what the distribution of $X_0$ is, while the right-hand side is only defined $\mu$-a.e...
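For the record, since $S$ is countable this conditional expectation is explicit at every atom of $\mu$ (a standard computation, independent of the Markov structure):

```latex
\text{For } x \in S \text{ with } \mu(\{x\}) = \mathbb{P}(X_0 = x) > 0:
\qquad
g(x) = \mathbb{E}[H \mid X_0 = x]
     = \frac{\mathbb{E}\bigl[H \, \mathbf{1}_{\{X_0 = x\}}\bigr]}{\mathbb{P}(X_0 = x)},
```

which follows from the defining property by taking $\mathbb{E}\bigl[g(X_0)\,\mathbf{1}_{\{X_0 = x\}}\bigr] = g(x)\,\mathbb{P}(X_0 = x)$; at states with $\mu(\{x\}) = 0$, $g$ is arbitrary.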
Edit:
So suppose $X_0$ has some distribution $\mu$. It would still make sense to define "$\tilde{\mathbb{P}}_x(A) = \tilde{\mathbb{E}}[1_A | \tilde X_0 = x]$", which gives a function of $x$ defined up to a $\mu$-null set. Then the notation $\tilde{\mathbb{P}}_x(A)$ is defined not only in the case where $X_0$ has the Dirac distribution at $x\in S$. But when we see "$\mathbb{P}_x$", why do we always assume "the process starts at $x$", that is, $X_0=x$ a.s., i.e. $\mu_{X_0} = \delta_x$?
According to Durrett, the measure $\tilde{\mathbb{P}}_x$ is not obtained by "conditioning $\tilde{\mathbb{P}}$ on $X_0$".
The first step is to fix an initial distribution $\mu$ for $X_0$, then apply the Kolmogorov extension theorem to the finite-dimensional distributions $$ \mathbb{P}(X_0\in B_0, \cdots, X_n \in B_n) = \int_{B_0} \mu(dx_0) \int_{B_1} p(x_0, dx_1) \cdots\int_{B_n} p(x_{n-1}, dx_n)$$ to obtain a measure $\tilde{\mathbb P}_\mu$ on the sequence space $(S^{\mathbb{N}}, \otimes_{n\in\mathbb{N}}\mathcal{S})$.
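Since $S$ is countable the iterated integrals are just sums; here is a small sketch (finite $S$, hypothetical numbers) of the finite-dimensional formula, together with the consistency property that the extension theorem requires (summing out the last coordinate recovers the shorter cylinder):

```python
import numpy as np
from itertools import product

p = np.array([[0.9, 0.1],    # hypothetical transition matrix on S = {0, 1}
              [0.4, 0.6]])
mu = np.array([0.3, 0.7])    # hypothetical initial distribution

def fdd(mu0, Bs):
    """P(X_0 in B_0, ..., X_n in B_n) via the iterated sum
    sum_{x_0 in B_0} mu0(x_0) sum_{x_1 in B_1} p(x_0, x_1) ..."""
    total = 0.0
    for path in product(*Bs):
        w = mu0[path[0]]
        for a, b in zip(path, path[1:]):
            w *= p[a, b]
        total += w
    return total

# Consistency check: taking B_n = S (sum out the last coordinate)
# gives back the probability of the shorter cylinder.
B = [{0}, {0, 1}, {0, 1}]
```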
So for different $x,y \in S$, we actually apply the Kolmogorov extension theorem twice (once for $X_0 \sim \delta_x$ and once for $X_0 \sim \delta_y$) to obtain $\tilde{\mathbb{P}}_x$ and $\tilde{\mathbb{P}}_y$.