Here is a question about compressed sensing. Let us denote by $x\in \mathbb{R}^n$ the $k$-sparse signal, by $A\in \mathbb{R}^{m\times n}$ the measurement/sensing matrix, and by $y = Ax \in \mathbb{R}^m$ the measurement.
In the noiseless setting, we know that the signal $x$ can be perfectly reconstructed if $A$ satisfies the restricted isometry property (RIP). Denote the reconstructed signal by $x'$. I am now interested in the reconstruction performance (i.e. the mean square error between $x$ and $x'$), rather than in uniqueness or perfect reconstruction. Intuitively, the reconstruction performance should improve as the number of measurements $m$ increases, but I could not find references for this idea. Is there a mathematical proof of this relationship?
Could anyone help me? Thanks in advance!
There is a wonderful book called "A Mathematical Introduction to Compressive Sensing" by Simon Foucart and Holger Rauhut. You can find the answer in Chapter 6 of that book.
I cannot give you the exact answer here, since quite a few mathematical concepts would need to be introduced first, but you can work it out yourself from the book.
By the way, the error really depends on which method you use to construct $x'$, for example $\ell_1$ minimization, iterative hard thresholding, or Orthogonal Matching Pursuit. There are many such methods, and their error bounds differ.
Also, there are constraints on the number of measurements $m$, but the sparsity $k$ plays a more important role than $m$ in the error analysis (if I remember correctly).
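To make that last point concrete: the typical flavor of error bound in that book (for $\ell_1$ minimization under an RIP assumption; I am stating it from memory, so please check the exact conditions and constants in Chapter 6) looks like

$$\|x - x'\|_2 \le \frac{C\,\sigma_k(x)_1}{\sqrt{k}} + D\,\varepsilon,$$

where $\sigma_k(x)_1$ is the $\ell_1$ error of the best $k$-term approximation of $x$, $\varepsilon$ is a bound on the noise $\|e\|_2$, and $C, D$ are absolute constants. Note that $m$ does not appear explicitly: it enters only indirectly, through the requirement that $A$ satisfy the RIP, which for random matrices typically needs on the order of $m \gtrsim k \log(n/k)$ measurements.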
In the book they consider the noisy case, i.e. $y = Ax + e$. But that doesn't matter: you can simply set the error term $e$ to zero, and the results still hold.
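As a quick, unscientific illustration of the trend you are asking about, here is a small NumPy simulation using Orthogonal Matching Pursuit (one of the methods mentioned above). This is only a sketch under arbitrary choices ($n=200$, $k=5$, a Gaussian sensing matrix, noiseless measurements), not a rigorous experiment; the function names are mine.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A."""
    n = A.shape[1]
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares refit of y on the enlarged support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

def reconstruction_mse(m, n=200, k=5, seed=0):
    """MSE of OMP reconstruction from m noiseless Gaussian measurements."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
    x_hat = omp(A, A @ x, k)
    return float(np.mean((x - x_hat) ** 2))

for m in (20, 40, 80):
    print(m, reconstruction_mse(m))
```

In runs like this you should see the MSE drop sharply once $m$ is comfortably above the $k \log(n/k)$ regime, and with enough measurements the noiseless reconstruction is essentially exact, which matches the intuition in your question.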