I am reading about an algorithm for finding minimum-weight words in large linear codes. Let $c$ be the codeword of weight $w$ to recover (of length $n$, over $GF(2)$). Let $N = \left\{1, 2, \ldots, n\right\}$ and $I\subset N$ with $|I| = k$. Let $\left\{X_i\right\}_{i\in N}$ be a stochastic process describing the number of nonzero bits of $c$ in $I$, where each random variable $X_i$ takes its values in the set $\left\{1, 2, \ldots, w\right\}$. The state space of the stochastic process is:
$$E = \left\{1, \cdots, 2p - 1\right\} \cup \left\{2p\right\} \cup \left\{2p + 1, \cdots, w\right\}$$
where
$X_i = u$ iff $|I\cap \textrm{supp}(c)| = u$, $\forall u \in \left\{1, \cdots, 2p-1\right\} \cup \left\{2p + 1, \cdots, w\right\}.$
My questions are: Why is the initial probability vector $\pi_0(u)=\dfrac{C^{w}_u C^{n-w}_{k-u}}{C^{n}_k}$ for $u \notin \left\{{2p}\right\}$? And, most importantly: how should these binomial coefficients be interpreted?
Definition: $\textrm{supp}(c)$ is the set of coordinates of the nonzero bits of $c$.
The question is hard to follow, with a lot of focus on a quantity $2p$ that plays no role in the formula, but the needed data are there: $c$ has weight $w$, i.e. $w=|\operatorname{supp}(c)|$, and $|I|=k$. Under the additional assumption that $I$ is drawn uniformly at random from all $k$-element subsets of $N$, the expression
$$ \frac{\binom wu\binom{n-w}{k-u}}{\binom nk} $$
gives the probability that $c$ has exactly $u$ non-zero bits in $I$. It is the ratio of the number of ways to choose $u$ of the $w$ positions of $\operatorname{supp}(c)$ and $k-u$ of the remaining $n-w$ positions to lie in $I$, to the total number of ways to choose the $k$ positions of $I$ out of $n$. In other words, $|I\cap\operatorname{supp}(c)|$ follows the hypergeometric distribution with parameters $n$, $w$, $k$.
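As a sanity check, the expression can be evaluated directly. The sketch below (Python, using the standard library's `math.comb`; the parameters $n=20$, $w=6$, $k=8$ are toy values chosen purely for illustration) computes $\pi_0(u)$ for each feasible $u$ and confirms the values sum to 1, as a probability distribution must.

```python
from math import comb

def pi0(u, n, w, k):
    """Hypergeometric probability that a uniformly random k-subset I of
    {1, ..., n} contains exactly u of the w support positions of c."""
    return comb(w, u) * comb(n - w, k - u) / comb(n, k)

# Toy parameters (hypothetical, for illustration only):
n, w, k = 20, 6, 8

# pi0(u) over all feasible u; the total equals 1 by Vandermonde's identity.
probs = [pi0(u, n, w, k) for u in range(0, min(w, k) + 1)]
print(probs, sum(probs))
```

Note that the feasible range of $u$ is cut off at $\min(w,k)$: $I$ cannot contain more support positions than the support has, nor more than $I$ itself holds.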