Let $x_1,x_2,\ldots,x_n \in \mathbb{R}^d$ be vectors and let $a_1, a_2, \ldots, a_n \in \mathbb{R}$ be i.i.d. random scalars distributed as $N(0,\sigma^2)$.
Is it then possible to lower-bound the probability that $\lVert\sum a_i x_i\rVert \geq \lVert x_1 \rVert$?
Edit: We can represent the $x_i$'s as the columns of a matrix $X$ and the $a_i$'s as a vector $a \sim N(0,\sigma^2 I)$, so that $\lVert\sum a_i x_i\rVert = \lVert Xa\rVert$. I tried to lower-bound this using the fact that $\lVert Xa\rVert \geq \sigma_{\min}(X)\,\lVert a \rVert$, and then to find a lower bound on $\Pr[\lVert a \rVert \geq 1]$, but couldn't succeed.
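As a quick numerical sanity check of the attempted bound (not part of the question; the dimensions, seed, and choice $n \le d$ so that $\sigma_{\min}(X) > 0$ generically are my own assumptions), one can sample many coefficient vectors and verify $\lVert Xa\rVert \geq \sigma_{\min}(X)\,\lVert a\rVert$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 8, 5, 1.0        # n <= d so X has full column rank generically

X = rng.standard_normal((d, n))                  # columns are x_1, ..., x_n
s_min = np.linalg.svd(X, compute_uv=False).min() # sigma_min(X)

# Draw many coefficient vectors a ~ N(0, sigma^2 I) and check the bound
A = sigma * rng.standard_normal((10000, n))
lhs = np.linalg.norm(A @ X.T, axis=1)    # ||X a|| for each sampled a
rhs = s_min * np.linalg.norm(A, axis=1)  # sigma_min(X) * ||a||

print(np.all(lhs >= rhs - 1e-9))  # prints True: the bound holds for every sample
```

Note that when $n > d$ the matrix $X$ has a nontrivial null space, so the smallest of its $n$ singular values is $0$ and this bound becomes vacuous.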
To find $P(\lVert\sum a_i x_i\rVert \geq \lVert x_1 \rVert)$, you first need the CDF of $\lVert\sum a_i x_i\rVert$. If we denote it by $f$, the desired probability is $1-f(\lVert x_1 \rVert)$. But how do we find $f$?
First, let's find the distribution of $\sum a_i x_i$. To do that, we represent the vectors as the columns of the $d \times n$ matrix $\begin{pmatrix} x_{1,1} & \cdots & x_{1, n} \\ \vdots & \ddots & \vdots \\ x_{d, 1} & \cdots & x_{d, n} \end{pmatrix}$. Denote its rows by $y_1, \ldots, y_d$. Then $\sum a_i x_i \sim N\Big(0, \sigma^2\begin{pmatrix} (y_1, y_1) & \cdots & (y_1, y_d) \\ \vdots & \ddots & \vdots \\ (y_d, y_1) & \cdots & (y_d, y_d) \end{pmatrix}\Big)$, since $\sum a_i x_i = Xa$ has covariance $\sigma^2 X X^T$ and $(XX^T)_{jk} = (y_j, y_k)$.
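This covariance formula is easy to verify empirically (a sketch with arbitrary dimensions and seed of my choosing; the Gram matrix of the rows of $X$ is exactly $XX^T$):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma = 3, 6, 2.0

X = rng.standard_normal((d, n))          # columns are x_1, ..., x_n
rows_gram = X @ X.T                      # entries (y_j, y_k) from the rows of X

# Empirical covariance of s = sum_i a_i x_i = X a over many draws of a
N = 500_000
A = sigma * rng.standard_normal((N, n))  # each row is one a ~ N(0, sigma^2 I)
S = A @ X.T                              # each row is one sample of X a
emp_cov = S.T @ S / N                    # mean is zero, so no centering needed

# Matches sigma^2 * X X^T up to Monte Carlo error (loose tolerance)
print(np.allclose(emp_cov, sigma**2 * rows_gram, atol=0.25))  # prints True
```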
So we now have a normal vector with known mean and covariance matrix, and we want to find the distribution of its norm. The question of how to do that is answered here.
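If a numerical answer suffices, you can also skip the closed-form CDF and estimate the probability directly by sampling from the derived normal distribution (again a sketch; dimensions and seed are arbitrary assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, sigma = 4, 7, 1.0

X = rng.standard_normal((d, n))    # columns are x_1, ..., x_n
cov = sigma**2 * (X @ X.T)         # covariance of s = X a, derived above

# Sample s ~ N(0, cov) directly and estimate P(||s|| >= ||x_1||)
S = rng.multivariate_normal(np.zeros(d), cov, size=100_000)
p_hat = np.mean(np.linalg.norm(S, axis=1) >= np.linalg.norm(X[:, 0]))
print(p_hat)
```

The Monte Carlo error here is $O(1/\sqrt{100{,}000})$, so this gives roughly two reliable decimal digits.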