Given $A \in \mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$, show that
$$ \text{rank} \begin{bmatrix} sI-A & b \end{bmatrix} = n \quad \forall s \in \mathbb{C} \quad \text{(the PBH criterion for controllability)} $$
if and only if the only $n\times n$ matrix $X$ such that $AX=XA$ and $Xb=0$ is $X=0$.
I can only get as far as the following (for the $\Rightarrow$ direction). Suppose $q$ is a left eigenvector of $A$, i.e., $qA=\lambda q$ for some $\lambda$. By the PBH rank test, no left eigenvector of $A$ is orthogonal to $b$, i.e., there is no such $q$ with $qb=0$. Now suppose $X$ satisfies $AX=XA$ and $Xb=0$. Left-multiplying by $q$ gives $qAX=\lambda qX=qXA$ and $qXb=0$, so either $qX=0$, or $qX$ is a left eigenvector of $A$ with the same eigenvalue satisfying $(qX)b=0$. The latter contradicts the PBH test, so $qX=0$. This holds for every left eigenvector of $A$. If $A$ is diagonalizable, then we can find a set of left eigenvectors that forms a basis for $\mathbb{R}^n$, and thus $X=0$. However, $A$ may not be diagonalizable.
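To convince myself that the equivalence is at least plausible, I also ran a quick numerical check. This is only a sketch (assuming numpy; the controllable pair $(A,b)$ is my own example), exploiting the fact that $AX=XA,\ Xb=0$ is a linear system in $X$:

```python
import numpy as np

# Example pair of my own choosing; (A, b) below is controllable.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
b = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# PBH test: rank [sI - A, b] = n at every eigenvalue s of A
# (for s not an eigenvalue, sI - A alone already has rank n).
pbh_holds = all(
    np.linalg.matrix_rank(np.hstack([s * np.eye(n) - A, b])) == n
    for s in np.linalg.eigvals(A)
)

# AX = XA, Xb = 0 is linear in X.  With column-stacking vec:
#   vec(AX - XA) = (I (x) A - A^T (x) I) vec(X),
#   vec(Xb)      = (b^T (x) I) vec(X),
# so X = 0 is the only solution iff the stacked matrix has full column rank.
K = np.vstack([np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n)),
               np.kron(b.T, np.eye(n))])
only_trivial_X = np.linalg.matrix_rank(K) == n * n

print(pbh_holds, only_trivial_X)  # both True for this pair
```

Both conditions agree on this pair, as the claimed equivalence predicts; replacing $b$ with an eigenvector of $A$ makes both flags flip to `False`.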
Suppose that $\text{rank} \begin{bmatrix} sI-A & b \end{bmatrix} = n \; \forall s \in \mathbb{C}$ fails to hold. That is, there exists a $\lambda \in \Bbb C$ for which $$ \text{rank} \begin{bmatrix} \lambda I-A & b \end{bmatrix} < n. $$ It follows that there exists a row vector $q \neq 0$ that solves the equation $q \begin{bmatrix} \lambda I-A & b \end{bmatrix} = 0$. Multiplying this through shows us that $$ 0 = q \begin{bmatrix} \lambda I-A & b \end{bmatrix} = \begin{bmatrix} q(\lambda I-A) & qb \end{bmatrix}. $$ That is, we have $qA = \lambda q$ and $qb = 0$. In particular, $\lambda$ is an eigenvalue of $A$, so we may let $p$ be a non-zero right eigenvector of $A$ associated with $\lambda$. Let $X$ denote the $n \times n$ matrix $X = pq$, which is non-zero since $p$ and $q$ are. We find that $$ AX = A(pq) = (Ap)q = \lambda pq, \quad XA = pqA = p(qA) = \lambda pq. $$ Moreover, $Xb = p(qb) = 0$. By contrapositive, if the only $X$ that satisfies $AX = XA$ and $Xb = 0$ is $X = 0$, then $\text{rank} \begin{bmatrix} sI-A & b \end{bmatrix} = n$ for all $s \in \mathbb{C}$.
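The rank-one construction $X = pq$ is easy to see in a small example. The sketch below (assuming numpy; the uncontrollable pair is my own illustration) uses a diagonal $A$ whose mode $s = 2$ is unreachable from $b$:

```python
import numpy as np

# Uncontrollable pair: the eigenvalue 2 of A is unreachable from b.
A = np.diag([1.0, 2.0])
b = np.array([[1.0],
              [0.0]])

q = np.array([[0.0, 1.0]])   # left eigenvector: qA = 2q, and qb = 0
p = np.array([[0.0],
              [1.0]])        # right eigenvector: Ap = 2p
X = p @ q                    # X = pq, nonzero and rank one

print(np.allclose(A @ X, X @ A))  # True: X commutes with A
print(np.allclose(X @ b, 0))      # True: X annihilates b
```

So a failure of the PBH test yields a non-zero $X$ with $AX = XA$ and $Xb = 0$, exactly as the argument above constructs it.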
Conversely, suppose that there exists a non-zero matrix $X$ for which $AX = XA$ and $Xb = 0$. We see that the row-space of $X$ is an $A^T$-invariant subspace, which means that there necessarily exists a row-vector $v$ such that $vX$ is a left-eigenvector of $A$.
So, let a row vector $v$ and $\lambda \in \Bbb C$ be such that $vXA = \lambda vX$ with $vX \neq 0$. We see that $$ vX \begin{bmatrix} (\lambda I-A) & b \end{bmatrix} = \begin{bmatrix} (vX)(\lambda I-A) & v(Xb) \end{bmatrix} = \begin{bmatrix} \lambda vX - (vX)A & 0 \end{bmatrix} = 0. $$ That is, $q = vX$ is a non-zero element of the left kernel of $\begin{bmatrix} (\lambda I-A) & b \end{bmatrix}$, which means that $\begin{bmatrix} (\lambda I-A) & b \end{bmatrix}$ fails to have full rank $n$. By contrapositive, if $\text{rank} \begin{bmatrix} sI-A & b \end{bmatrix} = n$ for all $s \in \mathbb{C}$, then the only $X$ that satisfies $AX = XA$ and $Xb = 0$ is $X = 0$.
Clarification on the statement, "we see that the row-space of $X$ is an $A^T$-invariant subspace, which means that there necessarily exists a row-vector $v$ such that $vX$ is a left-eigenvector of $A$."
Note that $A^TX^T = (XA)^T = (AX)^T = X^TA^T$. It follows that for every $z$, $A^T(X^Tz) = X^T(A^Tz)$. That is, for every $w = X^Tz$ in the range of $X^T$ (i.e. $\mathcal R = \{X^Tz : z \in \Bbb C^n\}$), $A^Tw$ is also in the range of $X^T$. Since $X \neq 0$, we have $\mathcal R \neq \{0\}$, so the restriction $A^T|_{\mathcal R}$ is a map from the non-trivial space $\mathcal R$ to itself, which means that it must have an eigenvector (working over $\Bbb C$, since the eigenvalue may be complex even when $X$ is real). That is, for some $z$ with $X^Tz \neq 0$, we have $$ A^T(X^Tz) = \lambda (X^Tz) \implies\\ [A^T(X^Tz)]^T = [\lambda (X^Tz)]^T \implies\\ (z^TX)A = \lambda z^TX. $$ $v = z^T$ is the row-vector that we're looking for, and $vX = (X^Tz)^T \neq 0$ as required.
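This construction can also be traced numerically. The sketch below (assuming numpy; the Jordan-block example is mine, chosen precisely because $A$ is not diagonalizable) builds an orthonormal basis of $\mathcal R$, restricts $A^T$ to it, extracts an eigenvector, and checks that the resulting $q = vX$ certifies the PBH failure:

```python
import numpy as np

# Non-diagonalizable example: A is a single Jordan block.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([[1.0],
              [0.0]])
X = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # AX = XA and Xb = 0, yet X != 0

# Orthonormal basis Q of range(X^T), via a rank-revealing SVD.
U, sv, Vt = np.linalg.svd(X.T)
r = int(np.sum(sv > 1e-12))
Q = U[:, :r]

# Matrix of the restriction A^T|_R in that basis; take any eigenpair.
R = Q.T @ A.T @ Q
lam, vecs = np.linalg.eig(R)
w = Q @ vecs[:, 0]               # eigenvector of A^T lying inside range(X^T)
q = w.reshape(1, -1)             # q = w^T = vX is a left eigenvector of A

print(np.allclose(q @ A, lam[0] * q))  # True: qA = lambda q
print(np.allclose(q @ b, 0))           # True: qb = 0

# Hence [lambda I - A, b] is rank-deficient: the PBH test fails at lambda.
M = np.hstack([lam[0] * np.eye(2) - A, b])
print(np.linalg.matrix_rank(M) < 2)    # True
```

This also answers the concern in the original question: no diagonalizability of $A$ is needed, only the existence of one eigenvector of the restricted map.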