In compressive sensing, what is the optimal dimension of the measurement matrix?


I have a sparse data vector ${\bf x} \in \mathbb{R}^n$, taken from a very large dataset. Approximately 75% of its entries are zeros, and I need to identify and work with a lower-dimensional representation of this space. I must do this within the formal framework of compressed sensing, not any other dimensionality reduction method.

I am looking for references on formal requirements for the dimension $m$ of the "measurement" mixing matrix (as in ${\bf y} = M {\bf x}$ where ${\bf y} \in \mathbb{R}^ m$ and $ M \in \mathbb{R}^{ m \times n}$ with $m < n$).

Specifically, I am trying to understand the relationship between the dimension of $M$ (the minimum $m$ required for recovery of the original signal $\bf x$) and the sparsity of the original signal $\bf x$. Any references or insight would be very helpful.

For additional clarity: by "optimal" I mean the smallest possible $m$ in $M \in \mathbb{R}^{ m \times n}$ that still allows perfect recovery of the original signal $\bf x$ through the formal compressive sensing reconstruction framework.
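For concreteness, here is a minimal numerical sketch of the setup described above (not an answer to the question): a $k$-sparse signal, a Gaussian measurement matrix, and recovery by basis pursuit ($\ell_1$ minimization) posed as a linear program. The choice $m \approx 4\, k \log(n/k)$ is an assumed heuristic for illustration, not a derived bound, and the constant 4 is arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, k = 100, 8                             # ambient dimension, sparsity level
m = int(np.ceil(4 * k * np.log(n / k)))   # heuristic m ~ C k log(n/k); C = 4 assumed

# Build a k-sparse signal x
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix M and measurements y = M x
M = rng.standard_normal((m, n)) / np.sqrt(m)
y = M @ x

# Basis pursuit: min ||x||_1 s.t. M x = y,
# as an LP with x = u - v, u >= 0, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([M, -M])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(x_hat - x)))
```

With $m$ well above the threshold, the recovered `x_hat` should match `x` up to solver tolerance; shrinking `m` toward `k` makes recovery fail, which is exactly the trade-off the question asks about.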