Using the inverse error function to show a probability is especially small.


I'm reading through the recent computer science paper [1] and I am stuck on the last line of the proof of Corollary 3.2. We want to show that the probability of an event $A$ is asymptotically small in $n$, i.e. $\Pr[A] = n^{-\omega(1)}$. Write $\mu := \Pr[A]$ below.

Let $\delta = o(1)$, let $\nu \geq n^{-k}$ for some constant $k$, let $x,z$ be independent standard Gaussians $N(0,1)$, and let $\Phi : \mathbb R \rightarrow [0,1]$ be the Gaussian CDF, with inverse $\Phi^{-1} : [0,1] \rightarrow \mathbb R$. Through a series of manipulations we have the following: $$\mathbb E_x\!\left[\Pr_z\!\left[\delta x + (1-\delta^2)^{1/2}z \geq -\Phi^{-1}(\nu)\right] \,\middle|\, x \leq \Phi^{-1}(\mu)\right] \leq n^{-c}.$$

Applying Markov's inequality to the random variable $$Y := \Pr_z\!\left[\delta x + (1-\delta^2)^{1/2}z \geq -\Phi^{-1}(\nu)\right],$$ we get that with probability at least $1/2$, conditioned on $x \leq \Phi^{-1}(\mu)$, $$\Pr_z\!\left[\delta x + (1-\delta^2)^{1/2}z \geq -\Phi^{-1}(\nu)\right] \leq 2n^{-c}.$$ (This follows from $\Pr\!\left[Y > 2\mathbb E[Y] \,\middle|\, x \leq \Phi^{-1}(\mu)\right] < 1/2$ together with the first inequality.)
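For a quick sanity check of the Markov step, here is a minimal simulation: an exponential variable stands in for the nonnegative $Y$ (in the proof $Y$ is a conditional probability, so $Y \in [0,1]$, but only nonnegativity is needed), and we verify empirically that $\Pr[Y \geq 2\mathbb E[Y]] \leq 1/2$.

```python
import random

random.seed(0)

# A generic nonnegative random variable Y (exponential with mean 1 here;
# Markov's inequality only needs Y >= 0).
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)

# Markov: Pr[Y >= a] <= E[Y]/a, so with a = 2*E[Y] the bound is 1/2.
frac_above = sum(1 for y in samples if y >= 2 * mean) / len(samples)
assert frac_above <= 0.5
```

For the exponential distribution the true value is $e^{-2} \approx 0.135$, comfortably below the Markov bound of $1/2$.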

My confusion is with the next step. The authors say that because $\delta = o(1)$, and assuming $c$ is large enough, we get $x \leq -\omega(\sqrt{\log n})$, which implies $\Phi^{-1}(\mu) \leq -\omega(\sqrt{\log n})$, which in turn implies $\mu = n^{-\omega(1)}$. How did they make this last step? My guess would be to use the identity $\Phi^{-1}(t) = \sqrt 2\, \operatorname{erf}^{-1}(2t-1)$ and a series expansion, but I haven't been able to manipulate the inequalities so as to use it.
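One candidate route (my reading, not necessarily the authors' intended argument) is the standard Gaussian tail bound $\Phi(-t) \leq e^{-t^2/2}$ for $t \geq 0$: if $\Phi^{-1}(\mu) \leq -t$ with $t = \omega(\sqrt{\log n})$, then $\mu \leq \Phi(-t) \leq e^{-t^2/2} = n^{-\omega(1)}$, with no series expansion of $\operatorname{erf}^{-1}$ needed. The sketch below checks this numerically with a stdlib-only $\Phi$ built from `math.erf` and an illustrative bisection inverse (both my own helpers, not from the paper):

```python
import math

def Phi(x: float) -> float:
    """Standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(t: float, lo: float = -40.0, hi: float = 40.0) -> float:
    """Invert Phi by bisection (illustrative, not production-grade)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if Phi(mid) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Round-trip check of the inverse.
for t in [0.01, 0.3, 0.5, 0.9]:
    assert abs(Phi(Phi_inv(t)) - t) < 1e-12

# Tail bound Phi(-t) <= exp(-t^2 / 2) for t >= 0: this is what turns
# Phi^{-1}(mu) <= -omega(sqrt(log n)) into mu = n^{-omega(1)}.
for t in [1.0, 3.0, 5.0]:
    assert Phi(-t) <= math.exp(-t * t / 2)

# Concretely: if Phi^{-1}(mu) <= -c * sqrt(log n), then mu <= n^{-c^2 / 2}.
n, c = 10**4, 1.5
mu = Phi(-c * math.sqrt(math.log(n)))
assert mu <= n ** (-c * c / 2)
```

Here `n` and `c` are arbitrary illustrative values; the point is only that the inequality direction matches the asymptotic claim.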

[1] "Pseudorandomness of Ring-LWE for Any Ring and Modulus" - Peikert, Regev, Stephens-Davidowitz, http://ia.cr/2017/258.