It is a well-known fact that $$\lambda^{1/2}B(\lambda^{-1}t)\stackrel{d}{=}B(t) \ ,$$ for $B$ a $1$-dimensional Brownian motion. We also know that for $\xi$ a $1\text{D}$ white noise, $$\lambda^{1/2}\xi(\lambda t)\stackrel{d}{=}\xi(t) \ .$$ (I'm abusing notation and treating $\xi$ as a function rather than a distribution. It would be more appropriate to say that for all "suitable" test functions $f$, we have $\lambda^{-1/2}\xi\left(t\mapsto f(\lambda^{-1} t)\right)\stackrel{d}{=}\xi(f)$.) This is quite a surprising fact on its own, because white noise blows up when you zoom in rather than out, which wouldn't happen for any "nice" function.
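For what it's worth, the Brownian scaling identity can be sanity-checked numerically. Here is a minimal Monte Carlo sketch (the discretization and all names are my own choices); since the marginals are centered Gaussians, matching variances is enough to match the one-point distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize B on [0, 1]: B((k+1)*dt) = sum of k+1 i.i.d. N(0, dt) increments.
n_paths, n_steps = 20000, 1000
dt = 1.0 / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)  # B[:, k] ~ B((k + 1) * dt)

lam, t = 4.0, 1.0  # compare the marginals of B(t) and lam^{1/2} B(t / lam)

k = int(t / lam / dt) - 1          # grid index of time t / lam
B_scaled = np.sqrt(lam) * B[:, k]  # lam^{1/2} B(lam^{-1} t)

# Both sample variances should be close to Var(B(t)) = t = 1.
print(np.var(B[:, -1]), np.var(B_scaled))
```

Both printed variances come out near $1$, as the scaling predicts.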
Now I'm trying to see whether I can derive this scaling of white noise from that of Brownian motion. That is, define $$B_\lambda(t):=\lambda^{1/2}B(\lambda^{-1}t)\stackrel{d}{=}B(t)$$ and, treating $\xi$ again as a function, use $$\xi(t) = \partial_tB(t) \ ,$$ so that $$\xi_\lambda(t) := \partial_tB_\lambda(t) \stackrel{d}{=}\partial_t B(t) = \xi(t) \ .$$ But now, computing $\xi_\lambda(t)$ directly, I get $$\xi_\lambda(t) = \partial_tB_\lambda(t) = \partial_t\left(\lambda^{1/2}B(\lambda^{-1}t)\right) = \lambda^{-1/2}\partial_t B(t) = \lambda^{-1/2}\xi(t) \ ,$$ which is absurd.
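The absurdity also shows up numerically. Below is a minimal sketch (my own grid and names), where $\xi$ is taken as the finite difference $\xi(t)\approx(B(t+\delta t)-B(t))/\delta t$, $\xi_\lambda$ is computed by differencing $B_\lambda(t)=\lambda^{1/2}B(t/\lambda)$ directly, and the claimed conclusion $\lambda^{-1/2}\xi(t)$ is compared against both:

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 4           # scaling factor (an integer, so the two grids line up)
n = 100_000       # number of xi samples
dt = 1e-3         # step used for the finite-difference derivative
h = dt / lam      # finer grid on which B is sampled

# Sample B on the fine grid: B[j] ~ B(j * h).
incr = rng.normal(0.0, np.sqrt(h), size=n * lam)
B = np.concatenate(([0.0], np.cumsum(incr)))

# xi(t) ~ (B(t + dt) - B(t)) / dt on the coarse grid; Var(xi) ~ 1/dt.
xi = np.diff(B[::lam]) / dt

# xi_lambda(t) = d/dt B_lambda(t), with B_lambda(t) = lam^{1/2} B(t / lam):
# B_lambda(k * dt) = lam^{1/2} B(k * h) = lam^{1/2} B[k].
xi_lam = np.diff(np.sqrt(lam) * B[:n + 1]) / dt

# The claimed conclusion lam^{-1/2} xi(t).
claimed = xi / np.sqrt(lam)

# Normalized variances: dt * Var is ~1, ~1 and ~1/lam respectively.
print(np.var(xi) * dt, np.var(xi_lam) * dt, np.var(claimed) * dt)
```

On this discretization $\xi_\lambda$ does match $\xi$ in variance, while the claimed $\lambda^{-1/2}\xi$ is off by a factor of $\lambda$, so the contradiction really is in the derivation and not in the scaling identities.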
So what's the wrong step in this derivation?