Suppose I have a standard Brownian bridge $B(t)=(W(t)|W(0)=W(1)=0)$.
Suppose further there is a finite set $S \subset [0,1]$ where we denote $s$ a generic element of $S$. Each $s$ is associated with a known $a_s \in \mathbb{R}$ and $b_s \in \mathbb{R}$ such that $B(s) \in [a_s,b_s]$. That is, the bridge passes through some defined "gates" at some points.
I wonder how I can determine the conditional distribution of $B(t)$ given this information.
There is a brute force way, by first computing the distribution of all possible Brownian bridges using the standard formulas and then truncating this distribution point-by-point, excluding all Brownian paths that do not pass through the gates.
However, given how simple everything becomes once $a_s=b_s$, it feels uneconomical and messy to do this, and I suspect there should be something very straightforward to compute this. Unfortunately, my attempts to check the literature returned nothing useful. So perhaps, I am missing something basic, and everything is super simple, and thus no one bothered to write it down. Or there is a term for this type of process that I am unaware of.
Can anyone help me out?
First, we remark that since $1 \in S$ with $a_1 = b_1 = 0$, the event $\{W_1 = 0\}$ is contained in the gate information, so the two conditionings below are the same.
Let $\mathcal{E}$ denote this information; we have $$B_t|\mathcal E = (W_t|\{W_1 = 0\})|\mathcal{E} = W_t|(\{W_1 = 0\}\cap \mathcal{E}) = W_t|\mathcal{E}$$
Then, the conditional distribution of $B_t$ given the information $\mathcal{E}$ is the conditional distribution of $W_t$ given $\mathcal{E}$.
We determine now the distribution of $W_t|\mathcal{E}$.
For the sake of simplicity, we assume that the finite set $S$ also contains the two endpoints $0$ and $1$ (so $a_0=b_0=a_1=b_1=0$). Hence, we can write $$\mathcal{E} = \{W_s \in[a_s,b_s], \forall s\in S\} $$
As the Brownian motion $W_t$ is Markovian, for $t \not\in S$ we reduce to the two neighbouring gate times $$x = \max\{s\in S : s<t\}, \qquad y = \min\{s\in S : s>t\},$$ and write $$W_t|\mathcal{E} = W_t|(\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\})\tag{1}$$
A word of caution here: what the Markov property gives exactly is that, conditionally on the *values* $W_x = z$ and $W_y = w$, $W_t$ is independent of all the other gates. The reduction $(1)$ to the two neighbouring gate *events* alone is therefore exact when $x$ and $y$ are the only gates; with additional gates, those gates tilt the joint law of $(W_x,W_y)$ inside its rectangle, and one should replace the bivariate normal below by the law of $(W_x,W_y)$ conditioned on all the gates. The derivation below is written for the two-gate reduction.
By the standard result on conditioning a Brownian motion at an intermediate time, we have $$(W_t\mid W_x = z, W_y = w)\sim \mathcal{N}\left(\frac{y-t}{y-x}z+\frac{t-x}{y-x}w,\frac{(y-t)(t-x)}{y-x}\right) \tag{2}$$ Let $\varphi(u;x,t,y,z,w)$ denote the density function of $(W_t\mid W_x = z, W_y = w)$; then $$\color{red}{\varphi(u;x,t,y,z,w) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(u-m)^2}{2\sigma^2}}} \tag{3}$$ with $$(m,\sigma) = \left(\frac{y-t}{y-x}z+\frac{t-x}{y-x}w,\sqrt{\frac{(y-t)(t-x)}{y-x}} \right)$$ Remark 1: just for information, $(2)$ is equivalent to $$(W_t\mid W_x, W_y) = \frac{y-t}{y-x}W_x+\frac{t-x}{y-x}W_y+Z$$ with $Z = W_t - \left(\frac{y-t}{y-x}W_x+\frac{t-x}{y-x}W_y\right)$ independent of $W_x$ and $W_y$; $Z$ follows the normal distribution $\mathcal{N} \left(0,\frac{(y-t)(t-x)}{y-x} \right)$.
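As a quick numerical sketch of $(2)$–$(3)$, here is a small Python helper (the function names `bridge_mean_std` and `phi` are ours, not from any library) computing the conditional mean, standard deviation, and density:

```python
import math

def bridge_mean_std(x, t, y, z, w):
    """Mean and standard deviation of W_t | W_x = z, W_y = w, for x < t < y."""
    m = (y - t) / (y - x) * z + (t - x) / (y - x) * w
    s = math.sqrt((y - t) * (t - x) / (y - x))
    return m, s

def phi(u, x, t, y, z, w):
    """Density (3) of W_t | W_x = z, W_y = w, evaluated at u."""
    m, s = bridge_mean_std(x, t, y, z, w)
    return math.exp(-(u - m) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

# Sanity check: the midpoint of a bridge pinned at 0 on [0, 1]
# has mean 0 and standard deviation sqrt(1/4) = 1/2.
m, s = bridge_mean_std(0.0, 0.5, 1.0, 0.0, 0.0)
```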
Let $p(u)$ be the density function of $W_t|\mathcal{E}$. Applying Bayes' theorem (writing densities informally as point probabilities $\mathbb{P}(W_t = u)$), we have $$\begin{align} p(u) &= \mathbb{P}(W_t = u|(\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\})) \\ &=\frac{\mathbb{P}(\{W_t = u\} \cap (\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\}))}{\mathbb{P}(\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\})} \tag{4} \end{align}$$
Recall that $(W_x,W_y)$ follows the two-dimensional Gaussian distribution $\mathcal{N}_2 (\mathbf{0}, \mathbf{\Sigma})$ with mean $\mathbf{0}$ and covariance matrix $$ \mathbf{\Sigma}= \begin{pmatrix} x & x \\ x & y \end{pmatrix} $$ Then the denominator of $(4)$ can be computed easily: it is the Gaussian rectangle probability $\color{red}{\Phi_2 \left(\begin{pmatrix} a_x \\ a_y \end{pmatrix} ,\begin{pmatrix} b_x \\ b_y \end{pmatrix} ; \Sigma\right)} $
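This rectangle probability can be evaluated by inclusion–exclusion on the bivariate normal CDF. A minimal sketch, assuming `scipy` is available (the function name `rect_prob` is ours):

```python
from scipy.stats import multivariate_normal

def rect_prob(a, b, cov):
    """P(a[0] <= X <= b[0], a[1] <= Y <= b[1]) for (X, Y) ~ N_2(0, cov),
    via inclusion-exclusion on the bivariate normal CDF."""
    F = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf
    return (F([b[0], b[1]]) - F([a[0], b[1]])
            - F([b[0], a[1]]) + F([a[0], a[1]]))

# Example: gates [-0.5, 0.5] at times x = 0.3 and y = 0.7.
x, y = 0.3, 0.7
cov = [[x, x], [x, y]]  # covariance matrix of (W_x, W_y)
p = rect_prob([-0.5, -0.5], [0.5, 0.5], cov)
```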
For the next step, we also let $\color{red}{\varphi_2(z,w; x,t,y)}$ denote the density function of the two-dimensional Gaussian vector $(W_x,W_y)$ (it does not depend on $t$; the argument is kept only for notational uniformity).
The numerator of $(4)$ is equal to:
$$\begin{align} \text{N} &:=\mathbb{P}(\{W_t = u\} \cap (\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\}))\\ &=\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\mathbb{P}(\{W_t = u\} \cap \{W_x =z\}\cap \{W_y =w\})dzdw\\ &=\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\underbrace{\mathbb{P}(\{W_t = u\} \mid \{W_x =z\}\cap \{W_y =w\})}_{\text{we will use }(3)}\cdot \underbrace{\mathbb{P}( \{W_x =z\}\cap \{W_y =w\}) }_{ =\varphi_2(z,w;x,t,y) }dzdw\\ &=\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\varphi(u;x,t,y,z,w) \cdot\varphi_2(z,w; x,t,y)dzdw\\ \end{align}$$
Thus, the density function of $W_t|\mathcal{E}$ is equal to $$\color{red}{p(u) = \frac{\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\varphi(u;x,t,y,z,w) \cdot\varphi_2(z,w; x,t,y)dzdw}{\Phi_2 \left(\begin{pmatrix} a_x \\ a_y \end{pmatrix} ,\begin{pmatrix} b_x \\ b_y \end{pmatrix} ; \Sigma\right)}} \tag{5}$$
Remark 2: In general, $(5)$ cannot be computed analytically, but it can easily be computed numerically, as the numerator and the denominator are both double integrals.
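To make Remark 2 concrete, here is a hedged numerical sketch of $(5)$ in Python, assuming `scipy` is available; the numerator is evaluated with `scipy.integrate.dblquad` and the denominator by inclusion–exclusion on the bivariate normal CDF. The function name `p_u` and the argument layout are ours:

```python
import math
from scipy.integrate import dblquad
from scipy.stats import multivariate_normal

def p_u(u, x, t, y, ax, bx, ay, by):
    """Density (5) of W_t given W_x in [ax, bx] and W_y in [ay, by], x < t < y."""
    rho = multivariate_normal(mean=[0.0, 0.0], cov=[[x, x], [x, y]])

    def phi(z, w):  # density (3) of W_t | W_x = z, W_y = w, evaluated at u
        m = (y - t) / (y - x) * z + (t - x) / (y - x) * w
        s2 = (y - t) * (t - x) / (y - x)
        return math.exp(-(u - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

    # Numerator: double integral over the two gates (dblquad integrates the
    # first argument w over [ay, by] and the second argument z over [ax, bx]).
    num, _ = dblquad(lambda w, z: phi(z, w) * rho.pdf([z, w]),
                     ax, bx, lambda z: ay, lambda z: by)
    # Denominator: Phi_2 rectangle probability by inclusion-exclusion.
    F = rho.cdf
    den = F([bx, by]) - F([ax, by]) - F([bx, ay]) + F([ax, ay])
    return num / den
```

As a sanity check, with very wide gates the conditioning becomes vacuous and $p(u)$ should approach the unconditional density of $W_t$, i.e. $\mathcal{N}(0,t)$.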