I've had an idea: I could do something like this:
$${\bf v_o}=\min_{\bf v} \{\|{\bf M(v+r)}\|_2^2 + \epsilon\|{\bf v}\|_2^2\}$$
for a random vector $\bf r$, and then $\bf v_o$ should point in the direction where $\bf M$ has its largest eigenvalue, because it is in the negative of that direction that the norm is squeezed the most. We can then iterate by setting $\bf r = r + v_o$ and re-solving until $\bf v_o$ no longer changes, at which point we should have a vector in the null space.
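For concreteness, the proposed iteration can be sketched in a few lines of NumPy. The rank-2 test matrix, the regularization `eps`, and the fixed step count are my own choices for illustration, not part of the question; each pass solves the ridge problem in closed form, $v = -(M^{\ast}M + \epsilon I)^{-1}M^{\ast}M\,r$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem: a 5x4 matrix of rank 2, so N(M) is nontrivial.
m, n, rank = 5, 4, 2
M = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))

eps = 1e-3
r = rng.standard_normal(n)

# Each pass solves  min_v ||M(v + r)||^2 + eps ||v||^2,
# whose closed form is  v_o = -(M^T M + eps I)^{-1} M^T M r.
A = M.T @ M + eps * np.eye(n)
for _ in range(100):
    v_o = -np.linalg.solve(A, M.T @ (M @ r))
    r = r + v_o

# r has converged to the projection of the starting r onto N(M):
print(np.linalg.norm(M @ r))  # ~0
```

One can check why this works: each step multiplies the component of $r$ along a singular direction with singular value $\sigma$ by $\epsilon/(\epsilon+\sigma^2)$, so range components decay geometrically while null space components are preserved exactly.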
But the iteration kind of bugs me. It is inelegant. I would like to try and express and solve it all simultaneously in one big grande equation system. Any ideas?


Problem statement
Let's take a slightly different perspective on this problem and say that we start with a matrix $\mathbf{M}\in\mathbb{C}^{m\times n}_{\rho}$ of rank $\rho$ and a data vector $b\in\mathbb{C}^{m}$ which is not in the null space $\color{red}{\mathcal{N}\left(\mathbf{M}^{*}\right)}$. Finally, we possess the Moore-Penrose pseudoinverse $\mathbf{M}^{+}$; that is, we have the least squares solution, and that unlocks both left and right null spaces.
Building null space vectors
The solution provides access to vectors in both null spaces: the left null space $\color{red}{\mathcal{N}\left(\mathbf{M}^{*}\right)}$ (when $\rho < m$) and the right null space $\color{red}{\mathcal{N}\left(\mathbf{M}\right)}$ (when $\rho < n$).
The least squares problem $$ x_{LS} = \left\{ x\in\mathbb{C}^{n} \colon \lVert \mathbf{M} x - b \rVert_{2}^{2} \text{ is minimized} \right\} $$ has the general solution $$ x_{LS} = \color{blue}{\mathbf{M}^{+} b} + \color{red}{\left( \mathbf{I}_{n} - \mathbf{M}^{+}\mathbf{M} \right)y}, \qquad y \in \mathbb{C}^{n} $$ Subspace membership is color coded for $\color{blue}{range}$ and $\color{red}{null}$ spaces.
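As a quick numerical sanity check (my own NumPy sketch; the rank-2 test matrix is an arbitrary choice), any choice of $y$ shifts $x_{LS}$ through the null space without changing the residual:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, rank = 5, 4, 2
M = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
b = rng.standard_normal(m)

Mp = np.linalg.pinv(M)                    # Moore-Penrose pseudoinverse M^+
x_min = Mp @ b                            # blue term: minimum-norm solution
y = rng.standard_normal(n)                # arbitrary n-vector
x_gen = x_min + (np.eye(n) - Mp @ M) @ y  # red term adds a null space shift

# The null space shift cannot change M x, so the residual is unchanged.
res_min = np.linalg.norm(M @ x_min - b)
res_gen = np.linalg.norm(M @ x_gen - b)
print(res_min, res_gen)  # equal to machine precision
```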
Right null space
The operator which projects an $n$-vector onto the null space $\color{red}{\mathcal{N}\left(\mathbf{M}\right)}$ is $$ \mathbf{P}_{\color{red}{\mathcal{N}\left( \mathbf{M}\right)}} = \color{red}{\left( \mathbf{I}_{n} - \mathbf{M}^{+}\mathbf{M} \right)} $$ Feed in any vector $y$ and the output is the projection of $y$ onto the right null space.
Left null space
The crucial insight here is to resolve the data vector into range and null space components: $$ b = \color{blue}{b_{\mathcal{R}}} + \color{red}{b_{\mathcal{N}}} $$
The matrix pseudoinverse provides the projection of the data vector onto the range space: $$ \color{blue}{b_{\mathcal{R}}} = \color{blue}{\mathbf{M}\mathbf{M}^{+}b} $$ A left null space vector is then $$ \color{red}{b_{\mathcal{N}}} = b - \color{blue}{b_{\mathcal{R}}} = b - \color{blue}{\mathbf{M}\mathbf{M}^{+}b} $$
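In NumPy this decomposition is one line each (same hypothetical rank-2 setup as above); the resulting $b_{\mathcal{N}}$ is annihilated by $\mathbf{M}^{*}$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, rank = 5, 4, 2
M = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
b = rng.standard_normal(m)

Mp = np.linalg.pinv(M)
b_range = M @ (Mp @ b)   # projection of b onto the range R(M)
b_null = b - b_range     # component in the left null space N(M^*)

# M^* annihilates the left null space component.
print(np.linalg.norm(M.conj().T @ b_null))  # ~0
```

Since $\rho = 2 < m = 5$ here, a generic $b$ has a nonzero left null space component, so `b_null` is a genuine (nonzero) null space vector.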