Notation: $f(n)=O(1)$ means the function is bounded above by a constant.
Setup: Let $\beta_j\stackrel{iid}{\sim}\text{Bernoulli}(\nu)$ for some $\nu=O(1)$, and let \begin{align*} X_j=-\alpha\sqrt{\frac{1}{n\alpha(1-\alpha)}} \end{align*} with probability $1-\alpha$ and \begin{align*} X_j=(1-\alpha)\sqrt{\frac{1}{n\alpha(1-\alpha)}} \end{align*} with probability $\alpha$, where $\alpha=O(1)$. The $X_j$'s are independent and identically distributed, and it is straightforward to show that $\mathbb{E}[X_j]=0$ and $\text{Var}[X_j]=\frac{1}{n}$. We are looking at the asymptotic regime where $p\rightarrow\infty$, $n\rightarrow\infty$, and $\frac{n}{p}=O(1)$.
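For completeness, here is the computation behind the mean and variance claims (both follow directly from the two-point distribution above): \begin{align*} \mathbb{E}[X_j] &= (1-\alpha)\cdot\left(-\alpha\sqrt{\tfrac{1}{n\alpha(1-\alpha)}}\right)+\alpha\cdot(1-\alpha)\sqrt{\tfrac{1}{n\alpha(1-\alpha)}}=0,\\ \text{Var}[X_j] &= \mathbb{E}[X_j^2] = \frac{(1-\alpha)\alpha^2+\alpha(1-\alpha)^2}{n\alpha(1-\alpha)} = \frac{\alpha(1-\alpha)}{n\alpha(1-\alpha)} = \frac{1}{n}. \end{align*}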
Problem: I want to show the following: \begin{align*} \sum_{j=1}^pX_j\beta_j=O(1), \end{align*} with high probability (i.e., with probability going to one as $n,p\rightarrow\infty$), but the best I could do is $O(\sqrt{n})$ (see my attempt below). How can I do better and achieve $O(1)$? Intuition tells me this should be possible because if the $X_j$'s were instead i.i.d. $N(0,1/n)$, the sum would be $O(1)$ with high probability. Thanks.
My attempt: Notice that the $X_j$'s are bounded random variables, which implies that they are sub-Gaussian. Then by a standard concentration result (e.g., a Chernoff bound) we can show that $X_j=O(\frac{1}{\sqrt{n}})$ with high probability (in fact, $|X_j|=O(\frac{1}{\sqrt{n}})$ holds deterministically, since both possible values are $O(\frac{1}{\sqrt{n}})$). Furthermore, $X_j\beta_j=O(\frac{1}{\sqrt{n}})$ as well, because $\beta_j\in\{0,1\}$. Summing over all $p$ terms, we get the upper bound $O(\frac{p}{\sqrt{n}})=O(\sqrt{n})$ (taking $p=\Theta(n)$).
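A quick Monte Carlo sanity check suggests the term-by-term bound above is very loose: the empirical standard deviation of $\sum_j X_j\beta_j$ comes out near $\sqrt{\nu p/n}$, which is $O(1)$ when $p\asymp n$. (The particular values of `n`, `p`, `alpha`, `nu` below are arbitrary choices of mine, used for illustration only.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters; n and p of the same order.
n, p = 1000, 1000
alpha, nu = 0.3, 0.5
scale = np.sqrt(1.0 / (n * alpha * (1 - alpha)))

trials = 2000
# X_j = (1-alpha)*scale w.p. alpha, and -alpha*scale w.p. 1-alpha.
heads = rng.random((trials, p)) < alpha
X = np.where(heads, (1 - alpha) * scale, -alpha * scale)
beta = rng.random((trials, p)) < nu  # Bernoulli(nu) indicators

S = (X * beta).sum(axis=1)  # one realization of the sum per trial

# Law of total variance: Var(S) = E[sum_j beta_j]/n = nu*p/n.
print(S.mean(), S.std(), np.sqrt(nu * p / n))
```

The empirical standard deviation matches $\sqrt{\nu p/n}\approx 0.707$ here, far below the $\sqrt{n}$ scale my bound gives.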