I have a Markov chain on a $d$-regular graph $G = (V,E)$ with $n$ vertices. Let $B$ be the subset consisting of the first $b$ vertices ($b < n$), and let $\textbf{b}$ denote the $n \times 1$ indicator vector of $B$, whose first $b$ entries are $1$ and whose last $n-b$ entries are $0$: \begin{equation} \textbf{b} = (1,1,\ldots,1,0,0,\ldots,0)^{T} \end{equation}
The $n \times n$ adjacency matrix, $A$, for $G$ is given as
\begin{equation} A(x,y) = \begin{cases} 1, & \text{if vertices $x$ and $y$ are neighbours in $G$}\\ 0, & \text{otherwise} \end{cases} \end{equation}
We can define the transition probability matrix, $\mathbf{P}$, for the random walk on the graph in terms of $A$:
\begin{equation} {\bf P}(x,y) = \frac{A(x,y)}{d} \end{equation}
Now I begin my random walk $(X_t)_{t \geq 0}$ from a uniformly random vertex of $B$, and I want to know the probability of remaining in $B$ after one step:
\begin{equation} {\bf P}(X_1 \in B \mid X_0 \in B) = \frac{{\bf b^TPb}}{b} \end{equation}
How can I bound this quantity, $\frac{{\bf b^TPb}}{b}$, in terms of $n$, $d$ and $b$?
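To make the quantity concrete, here is a small numerical sketch of $\frac1b {\bf b^TPb}$, assuming $X_0$ is uniform on $B$. The cycle graph $C_8$ (which is $2$-regular) and the choice $b = 3$ are illustrative picks, not part of the question:

```python
import numpy as np

# Sketch: b^T P b / b on the cycle C_8 (2-regular), B = first 3 vertices.
# Both the graph and b = 3 are illustrative assumptions.
n, d, b_size = 8, 2, 3

# Adjacency matrix of the cycle: vertex i is adjacent to i +/- 1 (mod n).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1

P = A / d            # transition matrix of the random walk
b = np.zeros(n)
b[:b_size] = 1       # indicator vector of B = {v_1, ..., v_b}

stay_prob = b @ P @ b / b_size

# Direct check: average, over starting vertices in B, of the chance
# that one step of the walk stays inside B.
direct = np.mean([P[x, :b_size].sum() for x in range(b_size)])
assert np.isclose(stay_prob, direct)
print(stay_prob)     # -> 2/3 for this graph and this B
```

The agreement of the two computations reflects that the quadratic form $\frac1b {\bf b^TPb}$ averages the one-step staying probability over a uniform starting vertex in $B$.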
Well, $\frac12{\bf b^TAb}$ just counts the edges of the subgraph induced by $\{v_1, v_2, \dots, v_b\}$, and ${\bf b^TPb} = {\bf b^TAb}/d$. So in full generality you can't do better than $\frac12{\bf b^TAb} \le \binom{b}{2}$ (useful when $b$ is small) and $\frac12{\bf b^TAb} \le \frac12 bd$ (useful when $b$ is large), corresponding to $\frac1b {\bf b^TPb} \le \frac{b-1}{d}$ and the trivial $\frac1b {\bf b^TPb} \le 1$ respectively; the first doesn't do much for us once $b > d$.
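A quick sketch of the counting identity and of how the $\binom{b}{2}$ bound can be tight. The complete graph $K_6$ (which is $5$-regular) is an illustrative choice where the induced subgraph is as dense as possible:

```python
import numpy as np

# Check: (1/2) b^T A b equals the number of edges of the induced subgraph,
# and b^T P b = b^T A b / d.  K_6 is an illustrative assumption: here the
# induced subgraph on b vertices is K_b, so the binomial bound is tight.
n = 6
A = np.ones((n, n)) - np.eye(n)   # adjacency matrix of K_6 (5-regular)
d = n - 1
P = A / d

b_size = 4
b = np.zeros(n)
b[:b_size] = 1

induced_edges = b @ A @ b / 2
assert induced_edges == b_size * (b_size - 1) / 2   # = C(4,2) = 6, tight

# The corresponding probability bound (b-1)/d is also attained:
stay_prob = b @ P @ b / b_size
assert np.isclose(stay_prob, (b_size - 1) / d)
print(induced_edges, stay_prob)
```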
Requiring that $G$ is connected is a good first step, but doesn't do much for your bound. It may be that there are very few edges connecting $\{v_1, v_2, \dots, v_b\}$ to the rest of the graph, in which case you could still end up with a bound like $1 - \frac1{bd}$ or so.
What you really want is a statement about the edge expansion of $G$. We can get bounds on this from the eigenvalues of $\bf A$, for instance. If $\lambda_2$ is the second-largest eigenvalue of $\bf A$ (the largest eigenvalue is trivially $d$) and $b \le \frac n2$, then there are at least $\frac{d-\lambda_2}{2} \cdot b$ edges from $\{v_1, v_2, \dots, v_b\}$ to $\{v_{b+1}, v_{b+2}, \dots, v_n\}$. Since the $b$ vertices have $bd$ edge endpoints in total, this leaves ${\bf b^TAb} \le bd - \frac{d-\lambda_2}{2} b = \frac{d+\lambda_2}{2} b$, so we get a bound of $\frac1b {\bf b^TPb} \le \frac{d + \lambda_2}{2d}$ instead.
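A numerical sketch of this spectral bound, assuming an illustrative graph: the $3$-dimensional hypercube $Q_3$ ($3$-regular, $n = 8$, $\lambda_2 = 1$), with $B$ taken as the first $b$ vertices in binary-label order:

```python
import numpy as np

# Verify the spectral bound (1/b) b^T P b <= (d + lambda_2)/(2d) and the
# expansion estimate on the hypercube Q_3 (an illustrative 3-regular graph).
n, d = 8, 3
A = np.zeros((n, n))
for x in range(n):
    for k in range(3):
        A[x, x ^ (1 << k)] = 1        # flipping one bit gives a neighbour

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
assert np.isclose(eigs[0], d)         # largest eigenvalue of a d-regular graph
lam2 = eigs[1]                        # = 1 for Q_3

P = A / d
for b_size in range(1, n // 2 + 1):   # the bound is stated for b <= n/2
    b = np.zeros(n)
    b[:b_size] = 1
    # At least (d - lam2)/2 * b edges leave B ...
    crossing = b @ A @ (1 - b)
    assert crossing >= (d - lam2) / 2 * b_size - 1e-9
    # ... which forces the staying probability below (d + lam2)/(2d).
    stay_prob = b @ P @ b / b_size
    assert stay_prob <= (d + lam2) / (2 * d) + 1e-9
print(lam2, stay_prob)
```

For $Q_3$ with $b = 4$ the bound $\frac{d+\lambda_2}{2d} = \frac23$ is attained, since the first four labels form a $2$-dimensional subcube.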
(See, e.g., Theorem 4 in these notes.)