I am reading the book Markov Chains and Stochastic Stability by Meyn and Tweedie. They define Markov chains on a measurable state space $(E,\Sigma)$ (Chapter 3.4), working on the space $\Omega = \prod_{i \in \mathbb{N}}E$ equipped with the $\sigma$-algebra $\mathcal{A}$, defined as the smallest $\sigma$-algebra containing all cylinder sets with only finitely many factors different from $E$: $$A_1 \times A_2 \times \dots \times A_n \times E \times E \times \dots$$
Then they define the Markov chain as a family of random variables $(X_n)_{n \in \mathbb{N}}$ where for $\omega=(x_n)_{n \in \mathbb{N}}\in \Omega$ they set $$X_n(\omega)=x_n .$$
Thus, all Markov chains are defined on the same set $\Omega$, and the random variables $(X_n)$ are always the same. When they speak of a particular initial distribution $\mu$ and transition kernel $p(x,A)$, they associate a Markov chain to it by constructing a specific measure $\mathbb{P}_\mu$. By this definition, two Markov chains differ only in the probability measure on the probability space.
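To make this concrete, here is a minimal sketch (my own construction and notation, not the book's) for a finite state space and finite path length: the measure $\mathbb{P}_\mu$ assigns each path $(x_0,\dots,x_n)$ the mass $\mu(x_0)\,p(x_0,x_1)\cdots p(x_{n-1},x_n)$, while the coordinate variables $X_n$ never change.

```python
from itertools import product

# Minimal sketch (assumed example data, not from the book): on a finite
# state space E and paths of length n+1, P_mu assigns each path
# (x_0, ..., x_n) the mass mu(x_0) * p(x_0, x_1) * ... * p(x_{n-1}, x_n).
E = [0, 1]
mu = {0: 0.5, 1: 0.5}                       # initial distribution (assumed)
p = {(0, 0): 0.9, (0, 1): 0.1,              # transition kernel p(x, {y}) (assumed)
     (1, 0): 0.2, (1, 1): 0.8}

def P_mu(path):
    """Probability mass of a finite path under the measure P_mu."""
    prob = mu[path[0]]
    for x, y in zip(path, path[1:]):
        prob *= p[(x, y)]
    return prob

# The coordinate variables X_n(omega) = omega[n] are fixed once and for all;
# only the measure changes when (mu, p) changes.
n = 2
total = sum(P_mu(path) for path in product(E, repeat=n + 1))
# total sums to 1: the path masses form a probability measure on E^{n+1}
```

Changing $\mu$ or $p$ changes only the function `P_mu`, i.e. the measure; the path space and the coordinate maps stay untouched.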
My problem is that in the book they define the term $$ \mathcal{F}_n = \sigma(X_0,\dots,X_n) \subseteq \mathcal{B}(X^{n+1})$$ and they say
> which is the smallest $\sigma$-field for which the random variable $\{X_0,\dots,X_n\}$ is measurable. In many cases $\mathcal{F}_n$ will coincide with $\mathcal{B}(X^{n+1})$, although this depends in particular on the initial measure $\mu$ chosen for a particular chain.
How can $\mathcal{F}_n$ depend on the initial measure? The random variable is already defined as $X_n(\omega)=x_n$, so the measurability of $\{X_0,\dots,X_n\}$ depends only on $\Sigma$ and $\mathcal{A}$. Where does the initial measure $\mu$ come into play?
Update: After seeing the answers, I think it is a good idea to supplement my question with an example. Let's consider the case where $E=\{1,2\}$ and $\Omega = E \times E$. The random variables $X_0$ and $X_1$ are already defined as above; in particular $X_0$ is defined by $$ X_0 ((1,1))=X_0((1,2))=1$$ and $$X_0((2,1))=X_0((2,2))=2.$$ Now if $\mathbb{P}_\mu$ is such that $X_0 = 1$ almost surely, then we must have $$ \mathbb{P}_\mu[\{(1,1),(1,2)\}]=1.$$ But this is completely independent of the definition of $\mathcal{F}_0$ (or $\mathcal{F}_n$). In this case we always have $$\mathcal{F}_0 = \{\{(1,1),(1,2)\},\{(2,1),(2,2)\},\Omega,\emptyset \}, $$ which does not depend on $\mu$. It seems to me that the answers assume that $\mathbb{P}_\mu[\{(2,1),(2,2)\}]=0$ somehow implies that this set should not belong to $\mathcal{F}_0$, but I think this is not correct.
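The example above can even be checked mechanically. The following sketch (names are my own, for illustration only) enumerates $\sigma(X_0)$ for $E=\{1,2\}$, $\Omega = E \times E$ by taking all unions of the preimages $X_0^{-1}(\{e\})$; no measure $\mu$ appears anywhere in the construction.

```python
from itertools import product, combinations

# Enumerate sigma(X_0) for E = {1, 2}, Omega = E x E (illustration only).
E = [1, 2]
Omega = list(product(E, E))          # [(1,1), (1,2), (2,1), (2,2)]

def X0(omega):
    return omega[0]                  # coordinate projection; no measure involved

# Preimages X_0^{-1}({e}) for e in E; these atoms partition Omega and
# generate sigma(X_0).
atoms = [frozenset(w for w in Omega if X0(w) == e) for e in E]

def sigma_from_atoms(atoms):
    """All unions of atoms: the sigma-algebra generated by a finite partition."""
    sets = set()
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            sets.add(frozenset().union(*combo) if combo else frozenset())
    return sets

F0 = sigma_from_atoms(atoms)
# F0 = { emptyset, {(1,1),(1,2)}, {(2,1),(2,2)}, Omega } -- mu plays no role
```

Running this yields exactly the four sets written above, confirming that $\mathcal{F}_0$ is determined by $X_0$ and $\mathcal{A}$ alone.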
$X_0$ is the random variable with measure $\mu$, so $\mathcal{F}_n$ does depend on its definition.
If $\mu$ were a discrete random variable, it could give very different possible paths depending on the number of outcomes. If $\mu$ allowed for only one outcome $0$, then the minimal $\sigma$-field with respect to which it is measurable is $\{\emptyset, \{0\}, Z \setminus \{0\}, Z\}$, where $Z$ is the state space. In comparison, a continuous random variable might allow for many, many more outcomes, i.e. a much larger $\sigma$-algebra.
To be concrete, compare Brownian motion with a fixed initial condition versus a normally distributed initial condition.