I am working on the proof of the title statement. I have the following proof, $A$ and $B$ being those matrices:
$$\begin{aligned} \mathrm{Ker}(A + B) &= \{X \,\mid \,X^T(A + B)X = 0 \} \\ &= \{X \,\mid \,X^TAX = 0 \} \cap \{X \,\mid \,X^TBX = 0 \} \\ &= \mathrm{Ker}(A) \cap \mathrm{Ker}(B)\end{aligned}$$
Here I have used the fact that for a positive semidefinite matrix, $\mathrm{Ker}\, A = \{X \,\mid \,X^TAX = 0 \}$; the second equality holds because $X^TAX$ and $X^TBX$ are both nonnegative, so their sum vanishes iff each term does. Then the rank–nullity theorem together with $\dim(\mathrm{Ker}\,A \cap \mathrm{Ker}\,B) \leq \min(\dim \mathrm{Ker}\,A, \dim \mathrm{Ker}\,B)$ gives $\mathrm{rank}(A + B) \geq k$.
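This identity is easy to spot-check numerically; a minimal sketch with NumPy (the random PSD matrices here are my own illustration, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_psd(r):
    # C^T C is positive semidefinite, with rank r for a generic r x n matrix C
    C = rng.standard_normal((r, n))
    return C.T @ C

A, B = random_psd(3), random_psd(2)

# Ker(A+B) = Ker A ∩ Ker B: every null vector of A+B is a null vector of both
_, s, Vt = np.linalg.svd(A + B)
null_AB = Vt[s < 1e-9]          # rows spanning Ker(A+B)
assert all(np.linalg.norm(A @ v) < 1e-8 for v in null_AB)
assert all(np.linalg.norm(B @ v) < 1e-8 for v in null_AB)

# Consequently rank(A+B) >= max(rank A, rank B)
print(np.linalg.matrix_rank(A + B), np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))
```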
However, looking at the solution (this is problem 2.13 from Boyd's Convex Optimization), the author, assuming $\mathrm{rank}(A + B) = r < k$, defines $V \in \mathcal{M}_{n, n-r}(\mathbb{R})$ such that $\mathrm{Im} \, V = \mathrm{Ker}(A + B)$. Since $V^T(A+B)V = V^TAV + V^TBV = 0$, positive semidefiniteness of $A$ and $B$ gives $V^TAV = V^TBV = 0$, and he claims that this implies the ranks of $A$ and $B$ are both at most $r$.
I cannot understand the reason behind this last claim; can someone explain it?
Note that $V^TAV = V^TBV = 0$ implies that the kernels of $A$ and $B$ each have dimension at least $n-r$. By the rank–nullity theorem, the ranks of $A$ and $B$ are then at most $r$.
To see that the kernels have dimension at least $n-r$, note that for any vector $x \in \Bbb R^{n-r}$ we have $(Vx)^TA(Vx) = x^T V^T A V x = 0$, and hence $A(Vx) = 0$ by positive semidefiniteness of $A$.
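For completeness, here is why $y^TAy = 0$ forces $Ay = 0$ when $A$ is positive semidefinite: write $A = C^TC$ for some matrix $C$ (always possible, e.g. $C = A^{1/2}$). Then
$$0 = y^TAy = y^TC^TCy = \|Cy\|^2 \implies Cy = 0 \implies Ay = C^TCy = 0.$$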
In other words, the kernel of $A$ (and likewise the kernel of $B$) contains the image of $V$, which is to say that the kernels of $A$ and $B$ each contain the kernel of $A + B$, a space of dimension $n - r$.
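The argument can also be checked numerically; a short sketch with NumPy (random PSD matrices chosen only for illustration): build $V$ from an orthonormal basis of $\mathrm{Ker}(A+B)$ and confirm that $V^TAV = V^TBV = 0$ and that both ranks are at most $r$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Random PSD matrices: C^T C is positive semidefinite with rank equal to rank(C)
A = (lambda C: C.T @ C)(rng.standard_normal((3, n)))
B = (lambda C: C.T @ C)(rng.standard_normal((2, n)))

r = np.linalg.matrix_rank(A + B)

# Columns of V form an orthonormal basis of Ker(A+B), so V is n x (n-r)
_, s, Vt = np.linalg.svd(A + B)
V = Vt[s < 1e-9].T

assert np.allclose(V.T @ A @ V, 0) and np.allclose(V.T @ B @ V, 0)
# ... and indeed the ranks of A and B are at most r
assert np.linalg.matrix_rank(A) <= r and np.linalg.matrix_rank(B) <= r
```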