I was reading the paper "On the probability that a random $\pm 1$ matrix is singular". In the paper the author defined the following notation:
$M_n$: a random $n\times n$ matrix with i.i.d. entries, each taking the values $\pm 1$ with equal probability.
$p(a)$: for every $a\in\mathbb{R}^n\setminus\{0\}$, define $$p(a)=\mathbb{P}(\epsilon^Ta=0),$$ where $\epsilon$ is a vector drawn uniformly from $\{\pm 1\}^n$.
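To make the definition concrete, here is a small brute-force computation of $p(a)$ by enumerating all of $\{\pm 1\}^n$ (my own illustration, not from the paper; the function name `p` just mirrors the notation above):

```python
from itertools import product

def p(a):
    """p(a) = P(eps^T a = 0) for eps drawn uniformly from {-1,+1}^n."""
    n = len(a)
    hits = sum(1 for eps in product((-1, 1), repeat=n)
               if sum(e * x for e, x in zip(eps, a)) == 0)
    return hits / 2 ** n

print(p((1, 1)))        # 0.5  (eps must be (1,-1) or (-1,1))
print(p((1, 1, 1, 1)))  # 0.375 = 6/16
```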
$E_a$: for every $a\in\mathbb{R}^n\setminus\{0\}$, $E_a$ is the event "$M_na=0$".
$L_i$: the event "the $i$-th row of $M_n$ is a linear combination of the other $n-1$ rows".
Then the author claimed that for any $p_0\in(0,1)$ and $1\leq i\leq n$ we have $$\mathbb{P}([\cup\{E_a: p(a)\leq p_0\}]\cap L_i)\leq p_0.$$
This is where I got confused. The author didn't provide a proof, and I am having trouble convincing myself that the above inequality is true. Is anybody willing to help? Any help will be greatly appreciated.
Modulo the following claim (which is true in low dimensions, and is quite plausible, but I didn't find a proof), it can be done.

**Claim.** A subspace of $\mathbb{R}^n$ of codimension $k$ that is spanned by $(\pm1)$ vectors contains exactly $2^{n-k}$ $(\pm1)$ vectors.
First note that the claim implies that if $a$ is orthogonal to a $k$-codimensional subspace generated by $(\pm1)$ vectors, then $p(a)\geq2^{-k}$. Indeed, by the claim, the subspace contains at least $2^{n-k}$ $(\pm1)$ vectors, and $a$ is orthogonal to all of them, so the probability of selecting one of them is at least $2^{n-k}/2^n=2^{-k}$.
A second consequence of the claim is that, given a $k$-codimensional subspace generated by $(\pm1)$ vectors, the probability that a uniformly selected $(\pm1)$ vector falls into this subspace is $2^{-k}$.
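Since the claim is only verified in low dimensions, here is a brute-force spot check one can run for small $n$ (a sketch of my own; the helper names `rank` and `check_claim` are hypothetical, and span membership is tested exactly over the rationals):

```python
from fractions import Fraction
from itertools import product

def rank(rows):
    """Rank of a list of row vectors, computed exactly over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def check_claim(n, spanning):
    """Does span(spanning) contain exactly 2**dim of the (+-1)-vectors?"""
    d = rank(spanning)
    # v lies in the span iff appending it does not increase the rank
    in_span = sum(1 for v in product((-1, 1), repeat=n)
                  if rank(spanning + [list(v)]) == d)
    return in_span == 2 ** d

# span of (1,1,1,1) and (1,1,-1,-1): dimension 2 (codimension 2 in R^4),
# and it should contain exactly 2^2 = 4 of the 16 (+-1)-vectors
print(check_claim(4, [[1, 1, 1, 1], [1, 1, -1, -1]]))  # True
```

For small $n$ one can even loop over all spanning sets of $(\pm1)$ vectors to test the claim exhaustively.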
Then denoting the codimension of the subspace generated by the rows $1,2,\ldots,i-1,i+1,\ldots,n$ by $C$, the above two observations give
$$ \mathbb{P}([\cup\{E_a: p(a)\leq p_0\}]\cap L_i\,|\,C=k) \leq \begin{cases} \mathbb{P}(\cup\{E_a: p(a)\leq p_0\}\,|\,C=k)=0, & \text{if $p_0<2^{-k}$} \\ \mathbb{P}(L_i\,|\,C=k)=2^{-k}\leq p_0, & \text{if $p_0\geq2^{-k}$.} \end{cases} $$
Finally, averaging over $k$ gives $$\mathbb{P}([\cup\{E_a: p(a)\leq p_0\}]\cap L_i)=\sum_k \mathbb{P}([\cup\{E_a: p(a)\leq p_0\}]\cap L_i\,|\,C=k)\,\mathbb{P}(C=k)\leq p_0,$$ as desired.