Let $\hat{\beta}$ be an estimator for $\beta$. If $\hat{\beta}- \beta \overset{p}{\rightarrow} 0$, what does it take for $\sqrt{n}(\hat{\beta}- \beta) \overset{p}{\rightarrow} 0$?
Suppose $\hat{\beta} = \frac{1}{n} + \beta$, then $\hat{\beta}- \beta \overset{p}{\rightarrow} 0$ and $\sqrt{n}(\hat{\beta}- \beta) \overset{p}{\rightarrow} 0$.
However, if $\hat{\beta} = \frac{1}{\sqrt{n}} + \beta$, then $\hat{\beta}- \beta \overset{p}{\rightarrow} 0$ but $\sqrt{n}(\hat{\beta}- \beta) \overset{p}{\rightarrow} 1$.
I'm a bit stumped. What condition(s) on $\hat{\beta}$ ensure that $\sqrt{n}(\hat{\beta}- \beta) \overset{p}{\rightarrow} 0$?
As posed, this question has no clean answer: the class of estimators is so broad that there is no general characterization of when $\sqrt{n}(\hat{\beta} - \beta) \stackrel{\Pr}{\rightarrow} 0$ holds; you have to check each estimator $\hat{\beta}$ separately.
As a general note, though, for most usual estimators, $\sqrt{n}(\hat{\beta} - \beta)$ converges in distribution and does not converge in probability. Estimators that do satisfy $\sqrt{n}(\hat{\beta} - \beta) \stackrel{\Pr}{\rightarrow} 0$ usually take advantage of some irregularity in the model.
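A quick simulation sketch of this point (my own illustration, not part of the original answer): for the sample mean of i.i.d. $N(\mu, \sigma^2)$ data, the spread of $\sqrt{n}(\bar{X}_n - \mu)$ stays near $\sigma$ no matter how large $n$ gets, which is convergence in distribution to $N(0, \sigma^2)$, not convergence in probability to $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, reps = 0.0, 1.0, 2000

spreads = {}
for n in (100, 2500):
    # sample means of n iid N(mu, sigma^2) draws, across many replications
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    # sqrt(n) * (estimator - truth): its spread stays near sigma as n grows
    spreads[n] = np.sqrt(n) * (xbar - mu)
    print(n, round(spreads[n].std(), 3))
```

Both printed standard deviations sit near $\sigma = 1$, so rescaling by $\sqrt{n}$ exactly balances the shrinking estimation error rather than killing it.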
The classic example of this is the model $X_i \sim U(0, \theta)$ with $\hat{\theta}_ n = \max\{X_1, \dotsc, X_n\}$, which has the limiting distribution $$ n(\hat{\theta}_n - \theta) \stackrel{\mathrm{d}}{\rightarrow} -Y, $$ where $Y \sim \mathrm{Exp}(1/\theta)$ (exponential with rate $1/\theta$, i.e. mean $\theta$). Since $n(\hat{\theta}_n - \theta)$ converges in distribution, it is bounded in probability, and so $$\sqrt{n}(\hat{\theta}_n - \theta) = \frac{1}{\sqrt{n}} \cdot n(\hat{\theta}_n - \theta) \stackrel{\Pr}{\rightarrow} 0.$$ This happens because the support of $X_i$ depends on $\theta$, which is exactly the sort of thing that I mean when I write about "irregularity in the model."
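To see the contrast numerically, here is a small simulation sketch (again my own illustration) of the $U(0, \theta)$ example: $n(\hat{\theta}_n - \theta)$ hovers around $-\theta$ at every sample size, while $\sqrt{n}\,|\hat{\theta}_n - \theta|$ shrinks toward $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, reps = 2.0, 4000  # true parameter and number of replications

errs = {}
for n in (100, 2500):
    # the estimator: sample maximum of n iid U(0, theta) draws
    theta_hat = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
    errs[n] = theta_hat - theta  # always <= 0
    # n * error is roughly -Exp(rate 1/theta), mean -theta;
    # sqrt(n) * |error| shrinks toward 0 as n grows
    print(n, round(np.mean(n * errs[n]), 3),
          round(np.sqrt(n) * np.abs(errs[n]).mean(), 4))
```

The first column of output stays near $-\theta = -2$ for both sample sizes (the nondegenerate limit at rate $n$), while the second column drops by roughly a factor of $5$ as $n$ goes from $100$ to $2500$, consistent with $\sqrt{n}(\hat{\theta}_n - \theta) \stackrel{\Pr}{\rightarrow} 0$.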
P.S. The $\hat{\beta}$ that you define in your question are not estimators, since they depend on $\beta$, whereas estimators must depend only on the data.