Setup: Suppose we have a signal $\beta\in\{0,1\}^p$ that we want to estimate, and an estimate $\hat{\beta}\in\mathbb{R}^p$ (whose entries may be continuous) such that the mean squared error satisfies $$ \frac{1}{p}\|\hat{\beta}-\beta\|_2^2 \rightarrow 0 \text{ as $p\rightarrow\infty$.} $$
Question: Suppose I quantize the entries of $\hat{\beta}$ by thresholding at $1/2$: $$ \tilde{\beta}_j = \begin{cases} 1 &\text{ if $\hat{\beta}_j>0.5$,} \\ 0 &\text{ if $\hat{\beta}_j\leq0.5$}. \end{cases} $$ I believe (intuitively) that we then also have $$ \frac{1}{p}\|\tilde{\beta}-\beta\|_2^2 \rightarrow 0 \text{ as $p\rightarrow\infty$.} $$ Is there a formal way to show this? In other words, does quantizing the estimate affect its limiting MSE?
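Not a proof, but here is a small numerical sketch of the behavior I have in mind. It assumes, purely for illustration, a Gaussian perturbation whose scale shrinks with $p$ (so the continuous MSE goes to $0$ by construction), and compares the MSE of $\hat{\beta}$ with that of the thresholded $\tilde{\beta}$:

```python
import numpy as np

rng = np.random.default_rng(0)

for p in [100, 10_000, 1_000_000]:
    # True binary signal beta in {0,1}^p.
    beta = rng.integers(0, 2, size=p).astype(float)

    # Illustrative estimate: beta plus Gaussian noise whose scale
    # shrinks with p, so (1/p)||beta_hat - beta||^2 -> 0 by construction.
    noise_scale = p ** -0.25
    beta_hat = beta + noise_scale * rng.standard_normal(p)

    # Quantize by thresholding at 1/2.
    beta_tilde = (beta_hat > 0.5).astype(float)

    mse_hat = np.mean((beta_hat - beta) ** 2)
    mse_tilde = np.mean((beta_tilde - beta) ** 2)
    print(f"p={p}: continuous MSE {mse_hat:.5f}, quantized MSE {mse_tilde:.5f}")
```

In this simulation both MSEs shrink together as $p$ grows, which matches the intuition; what I am after is a formal argument that this holds in general, not just for this noise model.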