I am reading several papers on parameter estimation in the Kalman-Bucy filtering scheme with small noise, among them "On Frequency Estimation for Partially Observed System with Small Noises in State and Observation Equations" (Chernoyarov, Kutoyants, Marcokova, 2018). We have two SDEs: \begin{align*} dX_t&=f(\vartheta_0 t)Y_t\,dt+\varepsilon\, dW_t,\\ dY_t&=-a(t)Y_t\,dt+b(t)\psi(\varepsilon)\,dV_t, \end{align*} where $W_t$ and $V_t$ are Brownian motions, $\psi(\varepsilon)=\mu \varepsilon$ for some $\mu$, and all other functions are "nice". It is well known that the conditional expectation $m(\vartheta,t)=\mathbb{E}_\vartheta[Y_t\mid X_s,\ 0\leq s \leq t]$ satisfies the Kalman-Bucy filter equation \begin{align*} dm(\vartheta,t)=-a(t)m(\vartheta,t)\,dt+\gamma_*(\vartheta,t)f(\vartheta t)\left[dX_t-f(\vartheta t)m(\vartheta,t)\,dt\right], \end{align*} together with the Riccati equation \begin{align*} \frac{\partial \gamma_*(\vartheta,t)}{\partial t}=-2a(t)\gamma_*(\vartheta,t)-\gamma_*^2(\vartheta, t)f^2(\vartheta t)+\mu^2b^2(t), \end{align*} where $\gamma_*(\vartheta,t)=\varepsilon^{-2}\gamma(\vartheta,t)$ is the normalized filtering error.
My question is: what happens to $m(\vartheta, t)$ as $\varepsilon \rightarrow 0$? We know that $Y_t$ converges to $y_t$ as $\varepsilon \rightarrow 0$, where $y_t$ is the solution of the state equation with $\varepsilon =0$. The paper claims that the limit of $m(\vartheta,t)$ as $\varepsilon \rightarrow 0$ is $m_0(\vartheta,t)=y_0\exp(-\int_0^t a(s)\,ds)$, which is exactly the solution of the equation for $Y_t$ with $\varepsilon =0$. This is very intuitive, but I don't know how to prove it.
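As a sanity check (not a proof), one can discretize the system, the filter and the Riccati equation with an Euler scheme and watch $\sup_t|m(\vartheta_0,t)-y_t|$ shrink as $\varepsilon \to 0$. Here is a minimal sketch; the concrete choices $a(t)=1$, $b(t)=1$, $f(u)=\cos(u)$, $\vartheta_0=2$, $\mu=1$, $y_0=1$, and evaluating the filter at the true $\vartheta_0$, are all my own assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative parameters (my assumptions, not from the paper):
# a(t) = 1, b(t) = 1, f(u) = cos(u), theta0 = 2, mu = 1, Y_0 = y_0 = 1.
rng = np.random.default_rng(0)
theta0, mu, y0, T, n = 2.0, 1.0, 1.0, 1.0, 20000
h = T / n
t = np.linspace(0.0, T, n + 1)
a = lambda s: 1.0
b = lambda s: 1.0
f = lambda u: np.cos(u)

# Deterministic limit y_t = y_0 * exp(-int_0^t a(s) ds) = y_0 * exp(-t) here
y = y0 * np.exp(-t)

# Generate the Brownian increments once and reuse them, scaled by eps,
# so that the errors for different eps are directly comparable.
dW = rng.standard_normal(n) * np.sqrt(h)
dV = rng.standard_normal(n) * np.sqrt(h)

def filter_error(eps):
    """Euler scheme for (X_t, Y_t), gamma_* and the filter m; returns sup_t |m - y_t|."""
    Y = np.empty(n + 1); Y[0] = y0
    m = np.empty(n + 1); m[0] = y0   # Y_0 known, so gamma_*(theta, 0) = 0
    g = 0.0                          # gamma_*(theta, t)
    for k in range(n):
        s = t[k]
        fk = f(theta0 * s)           # filter evaluated at the true theta0
        dX = fk * Y[k] * h + eps * dW[k]                       # observation increment
        Y[k + 1] = Y[k] - a(s) * Y[k] * h + b(s) * mu * eps * dV[k]
        m[k + 1] = m[k] - a(s) * m[k] * h + g * fk * (dX - fk * m[k] * h)
        g = g + (-2 * a(s) * g - g**2 * fk**2 + mu**2 * b(s)**2) * h
    return np.max(np.abs(m - y))

for eps in (0.5, 0.1, 0.01):
    print(eps, filter_error(eps))
```

With the same Brownian paths scaled by $\varepsilon$, the printed sup-error decreases as $\varepsilon$ does, consistent with the claimed limit $m_0(\vartheta,t)=y_0\exp(-\int_0^t a(s)\,ds)$.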