Consider the stochastic differential equation $$dX_t^\varepsilon = b(X_t^\varepsilon)\,dt + \varepsilon\,\sigma(X_t^\varepsilon)\,dB_t,$$ where $X_0^\varepsilon = x_0$.
I have a theorem that says that if $$|b(x)-b(y)|+|\sigma (x)-\sigma (y)|\leq K|x-y|$$ and $$|b(x)|+|\sigma (x)|\leq K(1+|x|),$$
then for all $t>0$ and all $\delta >0$, $$\mathbb E[|X_t^\varepsilon -x_t|^2]\leq \varepsilon ^2a(t)\quad \text{and}\quad \lim_{\varepsilon \to 0}\mathbb P\left\{\max_{0\leq s\leq t}|X_s^\varepsilon -x_s|>\delta \right\}=0,\tag{1}$$ where $\dot x_t=b(x_t)$ and $a$ is an increasing positive function.
Could someone explain what exactly $(1)$ means? I see that $\max_{0\leq s\leq t}|X_s^\varepsilon -x_s|$ converges to $0$ in probability, and also that $X_t^\varepsilon \to x_t$ in $L^2$, but I can't really interpret what it means.
This result basically says that as the noise term goes to zero, the solution of the SDE converges to the solution of the corresponding ODE. It is, in a sense, a weak version of the Freidlin–Wentzell theorem.
Your result $(1)$ may be refined (at least if $\sigma \equiv 1$): for every $\delta>0$ there exist constants $c_1$ and $c_2$ such that $$ \mathbb{P}\left(\max_{t \in [0,1]} |X^{\epsilon}_t - x_t| > \delta \right)\leq c_1 \exp\left(-\frac{c_2}{\epsilon^2}\right). $$
Such a result can be proven quite easily using Grönwall's inequality.
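Here is a rough sketch of that argument, assuming $\sigma \equiv 1$, working on $[0,1]$, and using the Lipschitz constant $K$ from the hypothesis: $$|X_t^\epsilon - x_t| \le \int_0^t |b(X_s^\epsilon)-b(x_s)|\,ds + \epsilon|B_t| \le K\int_0^t |X_s^\epsilon - x_s|\,ds + \epsilon \max_{s\le 1}|B_s|,$$ so Grönwall's inequality gives $\max_{t\le 1}|X_t^\epsilon - x_t| \le \epsilon\, e^{K}\max_{s\le 1}|B_s|$, and the claimed bound follows from the Gaussian tail estimate $\mathbb P\left(\max_{s\le 1}|B_s| > a\right) \le 4e^{-a^2/2}$ applied with $a = \delta e^{-K}/\epsilon$.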
The term $\epsilon^2$ in the exponent is important! I would interpret it as follows: the noise perturbs the ODE, but because the noise is zero on average, its impact on the trajectory stays very small as long as $\epsilon$ is small.
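If it helps to see this numerically, here is a minimal Euler–Maruyama sketch comparing $X^\epsilon$ with the ODE solution; the concrete choices $b(x)=-x$, $\sigma \equiv 1$, the step count, and the seed are illustrative only and not part of the theorem.

```python
import numpy as np

# Illustrative sketch (hypothetical choices b(x) = -x, sigma = 1):
# simulate dX = b(X) dt + eps * sigma(X) dB with Euler-Maruyama alongside
# the ODE x' = b(x), and report the maximal deviation over [0, T].
def max_deviation(eps, b=lambda x: -x, sigma=lambda x: 1.0,
                  x0=1.0, T=1.0, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments
    x_sde = x_ode = x0
    worst = 0.0
    for k in range(n):
        x_sde = x_sde + b(x_sde) * dt + eps * sigma(x_sde) * dB[k]
        x_ode = x_ode + b(x_ode) * dt
        worst = max(worst, abs(x_sde - x_ode))
    return worst

for eps in [0.5, 0.1, 0.02]:
    print(f"eps = {eps:5.2f}   max deviation ~ {max_deviation(eps):.4f}")
```

With a fixed seed, the printed maximal deviation shrinks roughly in proportion to $\epsilon$, in line with the bounds above.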