I'm currently trying to prove Exercise 1.19 in Liggett's Continuous Time Markov Processes, which asks me to prove the following.
Let $(B_t)_{t\ge0}$ be a Brownian motion, and let $X_t=B_t-t B_1$ for $0\le t\le 1$. Then show that for $0<t_1<\cdots<t_n<1$, the distribution of $(B_{t_1},\ldots,B_{t_n})$ conditioned on $|B_1|\le\varepsilon$ converges in law, as $\varepsilon\to0$, to the distribution of $(X_{t_1},\ldots,X_{t_n})$.
Now, I'm relatively familiar with the characterization of Gaussian processes and with methods of proving convergence in law (by looking at characteristic functions, for example), but I have very few tools at my disposal for proving convergence in law of a conditional distribution.
Furthermore, I have seen the problem stated before as showing that $B_{[0,1]}$ conditioned on $B_1=0$ is simply $X_{[0,1]}$ up to finite-dimensional distributions — but in what sense does it make sense to condition on $\{B_1=0\}$? Is this just notation for the limit, as $\varepsilon\to0$, of the law conditioned on $|B_1|\le\varepsilon$, or is there a larger idea or theory here?
A convenient way to compute the conditional PDF $f_{X\mid Y=y}$ of some random vector $X$ conditioned on $Y=y$, when $(X,Y)$ has a PDF $f_{X,Y}$, is the formula $$f_{X\mid Y=y}(x)=\frac{f_{X,Y}(x,y)}{f_Y(y)}\tag{$\ast$}$$ Choosing $X=(B_{t_1},\ldots,B_{t_n})$ for some $0<t_1<\cdots<t_n<1$, $Y=B_1$ and $y=0$ (note that $t_n<1$ is needed so that $(X,Y)$ has a joint density), this explains the "direct" meaning of the conditional distribution of $(B_t)_{0\leqslant t\leqslant1}$ conditionally on $B_1=0$.
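Since $(B_{t_1},\ldots,B_{t_n},B_1)$ is jointly Gaussian, formula $(\ast)$ can be worked out in closed form: conditioning a Gaussian vector on $Y=0$ yields another Gaussian with covariance given by a Schur complement. Here is a small NumPy sketch (the grid of times is an arbitrary choice for illustration) checking that this conditional covariance coincides with the covariance of the bridge $X_t=B_t-tB_1$, which is $\min(s,t)-st$:

```python
import numpy as np

# Arbitrary times 0 < t_1 < ... < t_n < 1 for illustration
t = np.array([0.2, 0.5, 0.9])

# Covariance of (B_{t_1}, ..., B_{t_n}): Cov(B_s, B_t) = min(s, t)
K = np.minimum.outer(t, t)
# Cross-covariance with Y = B_1: Cov(B_t, B_1) = t, and Var(B_1) = 1
c = t.copy()

# Gaussian conditioning (Schur complement), i.e. formula (*) evaluated
# for jointly Gaussian vectors: Cov(X | Y = 0) = K - c c^T / Var(Y)
cond_cov = K - np.outer(c, c)

# Covariance of the bridge X_t = B_t - t B_1: Cov(X_s, X_t) = min(s,t) - s t
bridge_cov = np.minimum.outer(t, t) - np.outer(t, t)

print(np.allclose(cond_cov, bridge_cov))  # → True
```

The agreement of the two matrices is exactly the finite-dimensional identity the exercise asks for, in the $Y=0$ (rather than $|Y|\le\varepsilon$) formulation.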
Regarding the approximation procedure you mention, note that, by definition, for every $y$, every positive $\epsilon$, and every bounded test function $u$ (written here for scalar $X$; the case $X\in\mathbb R^n$ is identical with $\int_{\mathbb R^n}$ in place of $\int_{\mathbb R}$), $$E(u(X)\mid |Y-y|<\epsilon)=\frac{E(u(X);|Y-y|<\epsilon)}{P(|Y-y|<\epsilon)}=\frac1{H(\epsilon,y)}\int_\mathbb R u(x)G(\epsilon,x,y)dx$$ where $$G(\epsilon,x,y)=\int_{y-\epsilon}^{y+\epsilon} f_{X,Y}(x,z)dz\qquad H(\epsilon,y)=\int_{y-\epsilon}^{y+\epsilon} f_Y(z)dz$$ At least when $f_{X,Y}$ is regular enough, one gets, when $\epsilon\to0^+$, $$G(\epsilon,x,y)\sim2\epsilon f_{X,Y}(x,y)\qquad H(\epsilon,y)\sim2\epsilon f_Y(y)$$ Thus, assuming the limits and the integral signs can be exchanged, $$ \lim_{\epsilon\to0}\,E(u(X)\mid |Y-y|<\epsilon) = \frac{\int_\mathbb R u(x)f_{X,Y}(x,y)dx}{f_Y(y)}$$ that is, $$E(u(X)\mid Y=y) =\int_\mathbb R u(x)f_{X\mid Y=y}(x)dx$$ which recovers formula $(\ast)$ above.
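The convergence $E(u(X)\mid |Y-y|<\epsilon)\to E(u(X)\mid Y=y)$ can also be seen numerically by Monte Carlo. A minimal sketch, taking $X=B_{1/2}$, $Y=B_1$, $y=0$ and $u(x)=x^2$ (my own choices for illustration): the exact limit is $\operatorname{Var}(B_{1/2}\mid B_1=0)=\tfrac12-\left(\tfrac12\right)^2=\tfrac14$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# (X, Y) = (B_{1/2}, B_1), built from two independent N(0, 1/2) increments
half = rng.normal(0.0, np.sqrt(0.5), n)      # B_{1/2}
X = half
Y = half + rng.normal(0.0, np.sqrt(0.5), n)  # B_1

# E(u(X) | |Y| < eps) for u(x) = x^2; as eps -> 0 this should approach
# Var(B_{1/2} | B_1 = 0) = 1/4
for eps in (0.5, 0.1, 0.02):
    est = np.mean(X[np.abs(Y) < eps] ** 2)
    print(f"eps = {eps}: E(X^2 | |Y| < eps) ~ {est:.4f}")
```

As $\epsilon$ shrinks, fewer samples satisfy $|Y|<\epsilon$ (a fraction of order $2\epsilon f_Y(0)$, matching the asymptotics of $H(\epsilon,0)$ above), which is the usual bias–variance trade-off of this kind of conditioning-by-thickening estimator.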