Suppose we have $\mathbf{A} \in \mathbb{R}^{m\times n}$ and $\mathbf{B} \in \mathbb{R}^{l\times n}$ stored in double precision on a computer someplace.
I would like to compute a basis for $\textrm{kern}(\mathbf{A})\cap\textrm{kern}(\mathbf{B}) = \{\mathbf{x} \in \mathbb{R}^n : \mathbf{A}\mathbf{x} = \mathbf{0},\ \mathbf{B}\mathbf{x} = \mathbf{0}\}$ (both kernels live in $\mathbb{R}^n$, since $\mathbf{A}$ and $\mathbf{B}$ each have $n$ columns).
Here are my thoughts on how to approach the issue (if you're coming from google to see the answer, see Misha's simpler solution below):
The desired kernel is exactly the set of minimizers of the following unconstrained quadratic program:
$$\mathbf{x} = \textrm{argmin}_{\mathbf{x} \in \mathbb{R}^n} ||\mathbf{A}\mathbf{x}||_2^2 + ||\mathbf{B}\mathbf{x}||_2^2$$
This being a standard least-squares problem, its minimizers are exactly the $\mathbf{x}$ satisfying the normal equations:
$$ (\mathbf{B}'\mathbf{B} + \mathbf{A}'\mathbf{A})\mathbf{x} = 0 $$
That is, I need simply compute the kernel of $\mathbf{B}'\mathbf{B} + \mathbf{A}'\mathbf{A}$.
An orthonormal basis for this kernel may be computed by getting rid of rows of this matrix which are linearly dependent until I have a set of linearly independent rows, then randomly extending that set to a basis for $\mathbb{R}^n$, and finally applying an orthonormalization algorithm such as Gram-Schmidt or one of its numerically preferred alternatives.
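As a concrete sketch of this normal-equations idea (NumPy assumed; the function name and tolerance are my own choices, and note that forming $\mathbf{A}'\mathbf{A}$ squares the condition number): take a symmetric eigendecomposition of $\mathbf{A}'\mathbf{A} + \mathbf{B}'\mathbf{B}$ and keep the eigenvectors whose eigenvalues are numerically zero.

```python
import numpy as np

def intersect_kernels_normal_eqs(A, B, tol=1e-10):
    # M = A'A + B'B is symmetric positive semidefinite, and
    # ker(M) = ker(A) ∩ ker(B).
    M = A.T @ A + B.T @ B
    # eigh returns eigenvalues in ascending order with orthonormal eigenvectors.
    w, V = np.linalg.eigh(M)
    # Eigenvectors with (numerically) zero eigenvalue span the kernel.
    return V[:, w <= tol * max(w.max(), 1.0)]

A = np.array([[1.0, 0.0, 0.0]])  # ker(A) = span{e2, e3}
B = np.array([[0.0, 1.0, 0.0]])  # ker(B) = span{e1, e3}
K = intersect_kernels_normal_eqs(A, B)  # single column spanning span{e3}
```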
I'm posting this question to ask whether anyone can confirm that this is a sound way of computing the kernel, or propose a better algorithm from a numerical or computational standpoint.
More straightforward is to stack the matrices on top of each other and solve $$ \begin{bmatrix} \mathbf A \\ \mathbf B\end{bmatrix} \mathbf x = \mathbf 0 $$ which is equivalent to solving $\mathbf A \mathbf x = \mathbf 0$ and $\mathbf B \mathbf x = \mathbf 0$ simultaneously.
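A sketch of this stacked approach using the SVD (NumPy assumed; the function name and rank tolerance are mine): the right singular vectors of $\begin{bmatrix} \mathbf A \\ \mathbf B\end{bmatrix}$ associated with numerically zero singular values form an orthonormal basis of the intersection. `scipy.linalg.null_space` does essentially the same thing in one call.

```python
import numpy as np

def intersect_kernels_stacked(A, B, rtol=1e-12):
    # x lies in ker(A) ∩ ker(B) iff it lies in the kernel of the stacked matrix.
    C = np.vstack([A, B])
    # Full SVD: rows of Vt beyond the numerical rank span ker(C).
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > rtol * s[0]))  # assumes C is not the zero matrix
    return Vt[rank:].T  # orthonormal columns spanning ker(A) ∩ ker(B)

A = np.array([[1.0, 0.0, 0.0]])  # ker(A) = span{e2, e3}
B = np.array([[0.0, 1.0, 0.0]])  # ker(B) = span{e1, e3}
K = intersect_kernels_stacked(A, B)  # single column spanning span{e3}
```

This avoids forming $\mathbf{A}'\mathbf{A}$, so it works with the singular values of the stacked matrix directly rather than their squares.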