I am considering the Sylvester equation $$AX + XB = C.$$ I am aware of the uniqueness criterion: if $\sigma(A)\cap\sigma(-B)=\emptyset$, then this equation has a unique solution for every choice of $C$.
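For concreteness, here is a quick numerical check of the criterion via the vectorized form: with column-stacking $\operatorname{vec}$, $AX + XB = C$ becomes $(I \otimes A + B^{t} \otimes I)\operatorname{vec}(X) = \operatorname{vec}(C)$, and the eigenvalues of that Kronecker sum are exactly the pairwise sums of eigenvalues of $A$ and $B$. The matrices below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical example matrices, chosen so that no eigenvalue of A is the
# negative of an eigenvalue of B.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # spectrum {2, 3}
B = np.array([[1.0, 0.0],
              [4.0, 5.0]])   # spectrum {1, 5}

n = A.shape[0]
# Column-stacking vec turns AX + XB = C into K vec(X) = vec(C) with
# K = I (x) A + B^T (x) I (Kronecker sum).
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))

# The eigenvalues of K are the pairwise sums a_i + b_j, so K (and hence the
# Sylvester map X -> AX + XB) is invertible iff no a_i + b_j vanishes.
eig_K = np.sort(np.linalg.eigvals(K).real)
pair_sums = np.sort([a + b
                     for a in np.linalg.eigvals(A).real
                     for b in np.linalg.eigvals(B).real])
print(np.allclose(eig_K, pair_sums))   # True
print(abs(np.linalg.det(K)) > 1e-9)    # True: unique solution for any C
```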
However, when considering the homogeneous version $$AX + XB = 0$$ we obtain an extra property: since the equation is linear and the right-hand side is zero, any scalar multiple of a solution $X_0$ is again a solution. This is where I become confused. Together with the uniqueness statement, this seems to imply that the only possible solution of the homogeneous Sylvester equation (under the conditions of the uniqueness theorem) is the zero solution $X = 0$.
This conclusion seems too restrictive to me (and makes me doubt my intelligence), so I was wondering where I made a mistake.
First consider the special case where $A$ is diagonal with diagonal entries $a_{i}$ and $B$ is diagonal with diagonal entries $b_{i}$. Then $(AX + XB)_{ij} = (a_{i} + b_{j})X_{ij}$, so if $X$ is nonzero and $AX + XB = 0$, then there are some $i$ and $j$ for which $a_{i} = -b_{j}$. On the other hand, if no diagonal entry of $A$ is the negative of a diagonal entry of $B$, then $a_{i} + b_{j} \neq 0$ for all $i$ and $j$, so $AX + XB = 0$ implies $X = 0$.
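The entrywise identity in the diagonal case is easy to sanity-check numerically. A small numpy sketch (the diagonal entries and $X$ are arbitrary choices):

```python
import numpy as np

# Entrywise check of the diagonal case: (AX + XB)_{ij} = (a_i + b_j) X_{ij}.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -4.0, 6.0])
A, B = np.diag(a), np.diag(b)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))

lhs = A @ X + X @ B
rhs = (a[:, None] + b[None, :]) * X   # entry (i, j) scaled by a_i + b_j
print(np.allclose(lhs, rhs))          # True
```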
Now suppose that $A$ and $B$ are symmetric $n \times n$ matrices. Let $M$ be the vector space of $n \times n$ matrices. Define linear maps $\alpha:M \to M$ and $\beta:M \to M$ by $\alpha(X) = AX$ and $\beta(X) = XB$. Because $A$ and $B$ are symmetric, $\alpha$ and $\beta$ are self-adjoint linear maps (with respect to the inner product $\operatorname{tr} X^{t}Y$ on $M$), and they commute as linear maps, so they are simultaneously diagonalizable. Each of $A$ and $B$ is diagonalizable. Let $u_{1}, \dots, u_{n}$ be eigenvectors of $A$ with eigenvalues $a_{1}, \dots, a_{n}$, and let $v_{1}, \dots, v_{n}$ be eigenvectors of $B$ with eigenvalues $b_{1}, \dots, b_{n}$. The $n^{2}$ matrices $u_{i}v_{j}^{t}$ are simultaneous eigenvectors of $\alpha$ and $\beta$, for $\alpha(u_{i}v_{j}^{t}) = Au_{i}v_{j}^{t} = a_{i}u_{i}v_{j}^{t}$ and $\beta(u_{i}v_{j}^{t}) = u_{i}v_{j}^{t}B = u_{i}(Bv_{j})^{t} = b_{j}u_{i}v_{j}^{t}$ (using $B = B^{t}$). Hence $(\alpha + \beta)(u_{i}v_{j}^{t}) = (a_{i} + b_{j})u_{i}v_{j}^{t}$. This diagonalizes $\alpha + \beta$ and shows that it is invertible if and only if $a_{i} + b_{j} \neq 0$ for all $i$ and $j$. Thus if no eigenvalue of $A$ is the negative of an eigenvalue of $B$, then $AX + XB = (\alpha + \beta)(X) = 0$ if and only if $X = 0$ (more generally, $AX + XB = C$ has a unique solution).
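The eigenvector claim for the symmetric case can be verified directly: the rank-one matrices $u_{i}v_{j}^{t}$ are eigenvectors of $X \mapsto AX + XB$ with eigenvalue $a_{i} + b_{j}$. A numpy sketch with arbitrary symmetric $A$ and $B$:

```python
import numpy as np

# Random symmetric matrices (arbitrary illustrative choices).
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M + M.T
N = rng.standard_normal((3, 3))
B = N + N.T

a, U = np.linalg.eigh(A)   # columns of U: orthonormal eigenvectors u_i
b, V = np.linalg.eigh(B)   # columns of V: orthonormal eigenvectors v_j

# Check (alpha + beta)(u_i v_j^T) = (a_i + b_j) u_i v_j^T for all i, j.
ok = True
for i in range(3):
    for j in range(3):
        E = np.outer(U[:, i], V[:, j])          # u_i v_j^T
        ok &= np.allclose(A @ E + E @ B, (a[i] + b[j]) * E)
print(ok)   # True
```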
The same argument can be adapted to the general case as follows. If $\lambda$ is an eigenvalue of $\alpha$, then it is an eigenvalue of $A$, for $\lambda X = \alpha(X) = AX$ implies that a nonzero column of $X$ is an eigenvector of $A$ with eigenvalue $\lambda$. Similarly, if $\lambda X = XB$ then a nonzero row of $X$ is a left eigenvector of $B$, so the spectrum of $\beta$ is contained in that of $B$. The linear operators $\alpha$ and $\beta$ always commute, and commuting linear operators are simultaneously triangularizable (this is the key point, perhaps not so commonly explained in introductory courses). It follows that the spectrum of $\alpha + \beta$ is contained in the set of sums of elements of the spectra of $\alpha$ and $\beta$, hence in the set of sums of eigenvalues of $A$ and $B$. The assumption that the spectrum of $A$ does not contain the negative of any element of the spectrum of $B$ therefore implies that $\alpha + \beta$ is invertible, which is the original claim. On the other hand, if the spectrum of $A$ does contain the negative $-b$ of some element $b$ of the spectrum of $B$, let $u$ be an eigenvector of $A$ with eigenvalue $-b$ and let $v$ be an eigenvector of $B^{t}$ with eigenvalue $b$ (so that $v^{t}B = bv^{t}$). Then $Auv^{t} + uv^{t}B = -buv^{t} + buv^{t} = 0$, so $X = uv^{t}$ is a nonzero solution of the homogeneous equation.
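The counterexample construction in the non-symmetric case can be illustrated numerically; note that $v$ must satisfy $v^{t}B = bv^{t}$, i.e. it is an eigenvector of $B^{t}$ (a left eigenvector of $B$). The matrices below are hypothetical illustrative choices with overlapping spectra:

```python
import numpy as np

# A has eigenvalue -b for an eigenvalue b of B, so X = u v^T with
# A u = -b u and B^T v = b v solves AX + XB = 0 nontrivially.
b = 2.0
A = np.array([[-b, 1.0],
              [0.0, 5.0]])       # spectrum {-2, 5}: contains -b
B = np.array([[b, 0.0],
              [3.0, 7.0]])       # spectrum {2, 7}: contains b

u = np.array([1.0, 0.0])         # A u = -b u
v = np.array([1.0, 0.0])         # B^T v = b v, i.e. v^T B = b v^T

X = np.outer(u, v)               # nonzero rank-one solution
print(np.allclose(A @ X + X @ B, 0))   # True
```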