I am looking at section 2.2 of this pdf: https://www.minet.uni-jena.de/Marie-Curie-ITN/EoF/talks/jeanblanc_introduction.pdf
Here is what it says:
Let us start with a Brownian Motion (BM) $\left(B_{t}, t \geq 0\right)$ and its natural filtration $\mathbb{F}^{B}$. Define a new filtration as $\mathcal{F}_{t}^{\left(B_{1}\right)}=\mathcal{F}_{t}^{B} \vee \sigma\left(B_{1}\right)$. In this filtration, the process $\left(B_{t}, t \geq 0\right)$ is no longer a martingale.
The Brownian bridge $\left(b_{t}, 0 \leq t \leq 1\right)$ is defined as the conditioned process $\left(B_{t}, t \leq 1 \mid B_{1}=0\right)$. Note that $B_{t}=\left(B_{t}-t B_{1}\right)+t B_{1}$ where, from the Gaussian property, the process $\left(B_{t}-t B_{1}, t \leq 1\right)$ and the random variable $B_{1}$ are independent. Hence $\left(b_{t}, 0 \leq t \leq 1\right) \stackrel{\text { law }}{=}\left(B_{t}-t B_{1}, 0 \leq t \leq 1\right)$. The Brownian bridge process is a Gaussian process, with zero mean and covariance function $s(1-t), s \leq t$. Moreover, it satisfies $b_{0}=b_{1}=0$. We can represent the Brownian bridge between 0 and $y$ during the time interval $[0,1]$ as \begin{equation*} \left(B_{t}-t B_{1}+t y ; t \leq 1\right) \end{equation*} More generally, the Brownian bridge between $x$ and $y$ during the time interval $[0, T]$ may be expressed as \begin{equation*} \left(x+B_{t}-\frac{t}{T} B_{T}+\frac{t}{T}(y-x) ; t \leq T\right) \end{equation*} where $\left(B_{t} ; t \leq T\right)$ is a standard BM starting from $0$.
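This representation is easy to check by simulation. The following Python/NumPy sketch (my addition, not from the slides; the grid size and path count are arbitrary choices) builds bridge paths as $b_t = B_t - tB_1$ and checks that $b_0 = b_1 = 0$ and that the covariance at $s = 1/4$, $t = 1/2$ is close to $s(1-t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, paths = 500, 20000            # time steps on [0, 1], number of sample paths
t = np.linspace(0.0, 1.0, n + 1)

# Standard BM paths: cumulative sums of independent N(0, 1/n) increments.
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=(paths, n))
B = np.concatenate([np.zeros((paths, 1)), np.cumsum(dB, axis=1)], axis=1)

# Brownian bridge via the representation b_t = B_t - t * B_1.
b = B - t * B[:, [-1]]

# b_0 = b_1 = 0 holds exactly by construction; Cov(b_s, b_t) = s(1 - t) for s <= t.
cov = np.mean(b[:, n // 4] * b[:, n // 2])  # s = 1/4, t = 1/2
print(cov)                                  # should be close to 0.25 * 0.5 = 0.125
```

Replacing the bridge line with `b = x + B - t * B[:, [-1]] + t * (y - x)` (for hypothetical endpoints `x`, `y`) simulates the general bridge from $x$ to $y$ in the same way.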
Proposition:
Let $\mathcal{F}_{t}^{\left(B_{1}\right)}=\bigcap_{\epsilon>0} \mathcal{F}_{t+\epsilon}^{B} \vee \sigma\left(B_{1}\right)$. The process \begin{equation*} \beta_{t}=B_{t}-\int_{0}^{t \wedge 1} \frac{B_{1}-B_{s}}{1-s}\, d s \end{equation*} is an $\mathbb{F}^{\left(B_{1}\right)}$-martingale, and an $\mathbb{F}^{\left(B_{1}\right)}$ Brownian motion. In other words, \begin{equation*} B_{t}=\beta_{t}+\int_{0}^{t \wedge 1} \frac{B_{1}-B_{s}}{1-s}\, d s \end{equation*} is the decomposition of $B$ as an $\mathbb{F}^{\left(B_{1}\right)}$-semimartingale.
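As a numerical sanity check (my addition, not in the slides): with the sign convention $\beta_t = B_t - \int_0^{t\wedge 1}\frac{B_1-B_s}{1-s}\,ds$, the process $\beta$ should behave like a standard Brownian motion that is uncorrelated with $B_1$. A NumPy sketch, discretizing the drift integral by a left-endpoint Riemann sum and evaluating at $t = 1/2$ to stay away from the singularity at $t = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, paths = 1000, 10000
dt = 1.0 / n
s = np.arange(n) * dt            # left endpoints of the subintervals of [0, 1]

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.concatenate([np.zeros((paths, 1)), np.cumsum(dB, axis=1)], axis=1)
B1 = B[:, -1]

# Left Riemann sum for the drift integral int_0^t (B_1 - B_s)/(1 - s) ds.
integrand = (B1[:, None] - B[:, :-1]) / (1.0 - s)
drift = np.concatenate([np.zeros((paths, 1)), np.cumsum(integrand, axis=1) * dt], axis=1)

beta = B - drift                 # candidate Brownian motion in the enlarged filtration

k = n // 2                       # t = 1/2
print(np.var(beta[:, k]))                    # should be close to t = 0.5
print(np.corrcoef(beta[:, k], B1)[0, 1])     # should be close to 0
```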
The author deduces from this proposition the following:
We obtain that the standard Brownian bridge $b$ is a solution of the following stochastic equation (take care about the change of notation) \begin{equation*} \left\{\begin{array}{l} d b_{t}=-\frac{b_{t}}{1-t} d t+d W_{t} ; 0 \leq t<1 \\ b_{0}=0 \end{array}\right. \end{equation*}
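This SDE can also be simulated directly. The following Euler–Maruyama sketch (my own illustration; step and path counts are arbitrary) integrates $db_t = -\frac{b_t}{1-t}\,dt + dW_t$ up to $t = 1 - \Delta t$, since the drift coefficient $\frac{1}{1-t}$ blows up at $t = 1$, and checks that $\operatorname{Var}(b_{1/2}) \approx \tfrac12(1-\tfrac12) = 0.25$ and that the paths are pulled back toward $0$ near the terminal time:

```python
import numpy as np

rng = np.random.default_rng(2)
n, paths = 2000, 20000
dt = 1.0 / n

# Euler-Maruyama for db_t = -b_t/(1-t) dt + dW_t, b_0 = 0, stopped at
# t = 1 - dt because the drift coefficient is singular at t = 1.
b = np.zeros(paths)
var_at_half = 0.0
for i in range(n - 1):
    t = i * dt
    b += -b / (1.0 - t) * dt + rng.normal(0.0, np.sqrt(dt), size=paths)
    if i + 1 == n // 2:
        var_at_half = np.var(b)  # b is now at time t = 1/2

print(var_at_half)               # should be close to t(1-t) = 0.25
print(np.mean(np.abs(b)))        # small: paths are pinned toward 0 as t -> 1
```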
How does the proposition imply the SDE form ?
First, by definition, the stochastic "differential equation" $$ dB_t = \frac{B_1-B_t}{1-t}\,dt + d\beta_t $$ is really shorthand for the stochastic integral equation $$ B_t = B_0 + \beta_t + \int_0^{t} \frac{B_1-B_s}{1-s}\,ds, \qquad 0 \le t \le 1. $$ That part is just a matter of definitions; here we take $B_0 = 0$. This gives a description, in the measure-theoretic sense, of the conditional distribution of $B_t$ given the random variable $B_1$. The goal here is to show that we can just "set $B_1 = 0$" in this expression to obtain an SDE that describes $B_t$ on the event $\{B_1 = 0\}$. This is actually a particularly nice SDE:
This SDE is really just an ODE in disguise. The operator $F:C[0,1] \times \mathbb{R} \to C[0,1]$, defined implicitly via $$ F(g,x)_t = g_t + \int_0^{t} \frac{x-F(g,x)_s}{1-s}\,ds, $$ is well defined (for each pair $(g,x)$ this fixed-point equation has a unique solution) and, in fact, is continuous as a function of both coordinates. Those facts all follow from an argument involving Gronwall's inequality. This is an unusually well-behaved SDE, which makes it a little easier to see what is going on.
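The Gronwall argument can be illustrated numerically (my addition; grid, horizon, and input path are arbitrary choices). The sketch below solves the fixed-point equation $h_t = g_t + \int_0^t \frac{x - h_s}{1-s}\,ds$ by Picard iteration on $[0, T]$ with $T = 0.9 < 1$; the sup-norm gap between successive iterates collapses to numerical zero, reflecting that the $k$-fold iterated integral operator has norm at most $\big(\log\frac{1}{1-T}\big)^k / k! \to 0$:

```python
import numpy as np

# Picard iteration for the fixed point h = F(g, x) of
#   h_t = g_t + int_0^t (x - h_s) / (1 - s) ds
# on [0, T] with T < 1 (the kernel 1/(1-s) is singular at s = 1).
T, n = 0.9, 2000
dt = T / n
s = np.arange(n) * dt                     # left endpoints for the Riemann sums
grid = np.concatenate([[0.0], s + dt])    # full grid 0, dt, ..., T

g = np.sin(5 * np.pi * grid)              # any continuous input path
x = 0.3

h = g.copy()
for _ in range(60):
    integrand = (x - h[:-1]) / (1.0 - s)
    h_new = g + np.concatenate([[0.0], np.cumsum(integrand) * dt])
    gap = np.max(np.abs(h_new - h))       # sup-norm distance between iterates
    h = h_new

print(gap)                                # collapses toward 0
```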
As you note, the SDE above gives the semimartingale decomposition of $B_t$ in the filtration $\mathcal{F}_t^{(B_1)}$, where $\beta_t$ is an $\mathcal{F}_t^{(B_1)}$ standard Brownian motion. First observe that $B_1$ is $\mathcal{F}_0^{(B_1)}$-measurable, which is immediate from the definition. This is really the key point: since $\beta$ is a Brownian motion in the filtration $\mathcal{F}_t^{(B_1)}$, the increment $\beta_t = \beta_t - \beta_0$ is independent of $\mathcal{F}_0^{(B_1)}$ and therefore of $B_1$. Conditioning, for example, on $\{|B_1| \leq \epsilon\}$, which has positive probability, will not impact the distribution of the process $\beta$. By continuity of $F$, the conditional distribution of the random variable $F(\beta,B_1)$ converges to that of $F(\beta,0)$ as we send $\epsilon \to 0$.
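This conditioning step can be seen in a crude Monte Carlo experiment (my own sketch; the sample sizes and $\epsilon$ are arbitrary): keep only the BM paths with $|B_1| \leq \epsilon$, an event of positive probability, and check that the retained paths statistically resemble a Brownian bridge, e.g. $\operatorname{Var}(b_{1/2}) \approx 1/4$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, paths, eps = 200, 50000, 0.05
dt = 1.0 / n

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.concatenate([np.zeros((paths, 1)), np.cumsum(dB, axis=1)], axis=1)

# Approximate conditioning on {B_1 = 0} by the positive-probability
# event {|B_1| <= eps}.
cond = B[np.abs(B[:, -1]) <= eps]
print(cond.shape[0])              # retained paths (roughly 4% of the total)

# Conditioned paths should look like a Brownian bridge:
# Var(b_t) = t(1 - t), i.e. 0.25 at t = 1/2.
print(np.var(cond[:, n // 2]))
```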
More generally, following the usual measure-theoretic treatment of problems like these, we would simply take $F(\beta,0)$ to be the definition of $F(\beta,B_1)$ on the event $\{B_1=0\}$, because $\beta$ and $B_1$ are independent, combined with the basic fact about measure theory recorded as Theorem 1.26(x) here.