While reading the following two papers:
1) Faster Least Squares Approximation of Drineas et al (https://arxiv.org/pdf/0710.1435.pdf)
2) Blendenpik: Supercharging LAPACK's least-squares solver of Avron et al (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.437.2874&rep=rep1&type=pdf)
I've noticed that a classical overdetermined least-squares problem \begin{align*} Z = \min_x \Vert Ax - b \Vert_2^2 \end{align*}
can be approximated by uniformly sampling $k$ rows of $A$ to form a matrix $\hat{A}$, and the corresponding entries of the vector $b$ to form $\hat{b}$, in such a way that, by taking \begin{align*} x_R = \arg\min_x \Vert \hat{A}x - \hat{b} \Vert_2^2, \end{align*}
for a given $\epsilon > 0$ the following bound holds: \begin{align*} \Vert Ax_R - b \Vert_2^2 \leq (1+\epsilon)Z \end{align*} with probability at least $1-\delta$, where the theorem fixes $\delta = 0.2$.
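To make the setup concrete, here is a small numpy sketch of the sample-then-solve scheme described above. Note this is plain uniform row sampling on a synthetic Gaussian matrix; the cited papers first apply a randomized (Hadamard-type) transform so that uniform sampling is valid for arbitrary, possibly coherent, matrices. The function name, matrix sizes, and the choice $k = 400$ are my own illustration, not from the papers.

```python
import numpy as np

def sketched_lstsq(A, b, k, seed=None):
    """Uniformly sample k rows of A and the matching entries of b,
    then solve the reduced least-squares problem for x_R."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(A.shape[0], size=k, replace=False)
    x_R, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)
    return x_R

# Small experiment comparing the sketched residual with the optimum Z.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 20))
b = rng.standard_normal(2000)

x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
Z = np.linalg.norm(A @ x_opt - b) ** 2      # optimal value

x_R = sketched_lstsq(A, b, k=400, seed=1)
Z_R = np.linalg.norm(A @ x_R - b) ** 2      # sketched value; Z_R >= Z always
print(Z_R / Z)                              # empirical (1 + eps) factor
```

By definition of the minimizer, $Z_R \geq Z$; the question is how tightly $Z_R \leq (1+\epsilon)Z$ holds, and with what failure probability $\delta$, as a function of $k$.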
I was trying to see whether there is a more general bound, since the authors only state the theorem for the case $1-\delta = 0.8$.
What would the algebraic expression for $\delta$ be in this case, as a function of the number of sampled rows $k$ and a given $\epsilon$?