I am trying to solve some convex optimization problems.
$$ C = \{x \in R^n \mid x^TAx + b^Tx + c \leq 0\} $$
where $A\in S^n$ (the symmetric $n\times n$ matrices), $b\in R^n$, and $c\in R$.
The question is to show that the intersection of $C$ and the hyperplane defined by $g^Tx + h = 0$ (with $g\neq 0$) is convex if and only if $A + \lambda gg^T \succeq 0$ for some $\lambda \in R$.
I am having a hard time interpreting that geometrically: how can $A + \lambda gg^T$ become positive semidefinite? I have a solution to it here (page 8, Exercise 2.10(b)).
I am thinking about it using a toy example in $R^2$: let $$A = \left[ \begin{array}{cc} -1 & 0 \\ 0 & -1 \end{array} \right], \qquad g = \left[ \begin{array}{cc} 2 & 2 \end{array} \right]^T.$$
I did compute the sum $A + \lambda gg^T$ for this example, and it seemed to me that it becomes positive semidefinite (psd), but I want to understand the geometric intuition behind it. I also understood the part of the solution that takes the intersection of $C$ and $H$, but then I have no clue how it shifts to a sum of matrices.
One thing I know from a previous exercise is that if a matrix is psd, then the set defined by its quadratic inequality is convex. It would be a great help if:
1) Someone could help me understand how the sum of two matrices, neither of which is necessarily psd, can itself be psd?
2) "How" the information from eigendecomposition of $A$ can be used here?
3) Also, I noticed that the other matrix, $gg^T$, has linearly dependent rows (it is rank one). Is that necessary, or could a full-rank matrix also make the sum psd?
P.S. This is not homework. My understanding of this question is that they want us to see that adding a matrix can make the sum psd, and the set it generates convex. Any other intuition behind this exercise would be a great help as well. Thanks.
Suppose $A$ is a real symmetric matrix that is not positive semidefinite, so there is a vector $v$ with $v^T A v < 0$. If we take $B = A + \lambda g g^T$, we have $v^T B v = v^T A v + \lambda (g^T v)^2$, and this is nonnegative as long as $g^T v \ne 0$ and $\lambda$ is large enough.
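A quick numerical check of this pointwise claim (an illustration with values I chose, not from the exercise): fix a $v$ with $v^TAv < 0$ and $g^Tv \ne 0$, and watch $v^TBv$ become nonnegative as $\lambda$ grows.

```python
import numpy as np

# Illustrative values (chosen here, not from the exercise):
A = np.array([[-1.0, 0.0],
              [ 0.0, 2.0]])      # symmetric, one negative eigenvalue
g = np.array([3.0, 1.0])
v = np.array([1.0, 0.0])         # v^T A v = -1 < 0 and g^T v = 3 != 0

for lam in [0.0, 0.1, 1.0]:
    B = A + lam * np.outer(g, g)
    # v^T B v = v^T A v + lam * (g^T v)^2 = -1 + 9*lam
    print(lam, v @ B @ v)
```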
If $A$ has just a single, simple negative eigenvalue $\mu$ with eigenvector $v_\mu$, all other eigenvalues being nonnegative, we can represent any vector $x$ as $x = c_0 v_\mu + w$, where $w$ is in the span of the eigenvectors for nonnegative eigenvalues, so that $w^T A w \ge 0$ and $v_\mu^T A w = 0$. Now
$$ x^T B x = c_0^2\, v_\mu^T A v_\mu + w^T A w + \lambda \left( c_0\, g^T v_\mu + g^T w \right)^2. $$
Note the cross term inside the square: it is not enough that $g^T v_\mu \ne 0$, because a suitable $w$ can cancel $c_0\, g^T v_\mu$, leaving $g^T x = 0$ while $x^T A x < 0$ (try $A = \operatorname{diag}(-1, 0)$, $g = (1,1)^T$, $x = (1,-1)^T$). What matters is how $A$ behaves on the hyperplane $g^\perp$: if $x^T A x > 0$ for every nonzero $x$ with $g^T x = 0$, then a compactness argument on the unit sphere shows that $x^T B x \ge 0$ for all $x$ once $\lambda$ is large enough.
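Here is a numerical sketch (my own made-up values, not from the original solution) of a case where the lift succeeds: $A = \operatorname{diag}(-1, 2)$ and $g = (3,1)^T$, so that $g$ is not orthogonal to the negative eigenvector and $x^TAx = 17 > 0$ on the line $g^\perp = \operatorname{span}((1,-3)^T)$. The smallest eigenvalue of $B$ becomes positive for moderate $\lambda$.

```python
import numpy as np

# Hypothetical example values (not from the exercise):
A = np.diag([-1.0, 2.0])     # one simple negative eigenvalue
g = np.array([3.0, 1.0])     # x^T A x = 17 > 0 on g-perp = span((1, -3))

for lam in [0.0, 0.1, 1.0]:
    B = A + lam * np.outer(g, g)
    # smallest eigenvalue of B, which crosses zero as lam grows
    print(lam, np.linalg.eigvalsh(B).min())
```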
On the other hand, this won't work if $A$ has two negative eigenvalues (or a negative eigenvalue of multiplicity $> 1$), because then whatever $g$ you choose there will be a nonzero $v$ orthogonal to $g$ in the space spanned by the eigenvectors for negative eigenvalues; you'll then have $v^T B v = v^T A v < 0$.
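This obstruction is easy to verify numerically (again with made-up values): with two negative eigenvalues, there is a vector $v$ in the negative eigenspace with $g^Tv = 0$, and then $v^TBv = v^TAv$ stays negative for every $\lambda$.

```python
import numpy as np

# Hypothetical example: two negative eigenvalues
A = np.diag([-1.0, -2.0, 3.0])
g = np.array([1.0, 1.0, 1.0])
v = np.array([1.0, -1.0, 0.0])   # in the negative eigenspace, g^T v = 0

for lam in [0.0, 10.0, 1000.0]:
    B = A + lam * np.outer(g, g)
    # stays at v^T A v = -3 no matter how large lam is
    print(lam, v @ B @ v)
```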
Your "toy example" does not work, because the negative eigenvalue $-1$ has multiplicity $2$. If you take the given $g$, $$ A + \lambda g g^T = \pmatrix{4 \lambda - 1 & 4\lambda\cr 4 \lambda & 4 \lambda - 1\cr}$$ is not psd, because it still has eigenvector $(1,-1)^T$ (a vector orthogonal to $g$) for eigenvalue $-1$.