Optimization of the sum of a convex and a non-convex function?


I have two quadratic functions $f(x)$ and $g(x)$, with $x \in \mathbb{R}^{n}$, which are convex (PSD Hessian) and non-convex, respectively.

Adding them together as $(1-\lambda)f(x)+\lambda g(x)$, for $\lambda=\lambda^*$ their sum becomes convex (PSD Hessian), so I can optimize it using quadratic programming: $$x^*=\underset{x}{\operatorname{argmin}}\ (1-\lambda^*)f(x)+\lambda^* g(x)$$ $$\text{s.t.}~~x_i \ge 0$$
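The setup above can be sketched numerically. This is a minimal illustration with an assumed toy instance (the matrices `Q`, `H`, the vector `c`, and the weight `lam` are all made up, not from the question): it checks that the combined Hessian is PSD and then solves the bounded problem with `scipy.optimize.minimize`.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-D instance (all values assumed, for illustration only):
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric PD -> f is convex
c = np.array([-1.0, -1.0])
H = np.array([[1.0, 2.0],
              [2.0, -1.0]])         # symmetric indefinite -> g is non-convex
lam = 0.2                           # assumed lambda* for this instance

# Convexity check: the Hessian of the combination is 2*((1-lam)*Q + lam*H),
# so it suffices that (1-lam)*Q + lam*H is PSD.
M = (1 - lam) * Q + lam * H
assert np.linalg.eigvalsh(M).min() >= 0, "combination is not convex"

def objective(x):
    f = x @ Q @ x + c @ x           # convex quadratic
    g = x @ H @ x                   # non-convex quadratic
    return (1 - lam) * f + lam * g

# Solve the convex problem with non-negativity bounds
# (L-BFGS-B, the default bounded method, handles x_i >= 0 directly).
res = minimize(objective, x0=np.ones(2), bounds=[(0, None), (0, None)])
x_star = res.x
```

For a problem this small a general-purpose bounded minimizer is enough; a dedicated QP solver would be the natural choice at larger scale.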

The function $g(x)$ has a saddle point at $x=0$ and also attains minima on the boundary.

Solving the above QP yields $x^*$.

Now, for comparison, if I solve $$\tilde{x}=\underset{x}{\operatorname{argmin}}\ f(x)$$ $$\text{s.t.}~~x_i \ge 0$$ I observe $f(x^*)>f(\tilde{x})$ and $g(x^*)<g(\tilde{x})$, which shows that adding $g(x)$ was effective for its intended purpose. But how can I justify this, knowing that $g(x)$ is still a non-convex term in the first optimization?
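The comparison above can be reproduced on a small assumed instance (all matrices and the weight `lam` below are made up for illustration, not from the question). The comments note the optimality argument behind each inequality:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance (all values assumed, for illustration only).
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric PD
c = np.array([-1.0, -1.0])
H = np.array([[1.0, 2.0], [2.0, -1.0]])  # symmetric indefinite
lam = 0.2                                 # assumed lambda*

def f(x): return x @ Q @ x + c @ x
def g(x): return x @ H @ x
def combined(x): return (1 - lam) * f(x) + lam * g(x)

bounds = [(0, None), (0, None)]
x_star  = minimize(combined, np.ones(2), bounds=bounds).x  # argmin of the sum
x_tilde = minimize(f,        np.ones(2), bounds=bounds).x  # argmin of f alone

# Since x_tilde minimizes f over the same feasible set:  f(x_star) >= f(x_tilde).
# Since x_star minimizes the weighted sum:               g(x_star) <= g(x_tilde)
# (combine the two optimality inequalities; convexity of g is not needed here).
```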

Does the non-convexity of $g(x)$ not affect the optimal solution as long as the whole objective is convex, or does it?

In fact, I know that if both $f$ and $g$ were convex, $\lambda$ could act as a trade-off between having the result closer to the optimum of one or the other; but when $g$ is not convex, I'm afraid I cannot say the same about the optimum of $g$.

EDIT:

$f(x)=x^\top Q x+c^\top x$ and $g(x)=x^\top H x$, where $Q$ is symmetric positive definite and $H$ is merely symmetric, but $(Q+\lambda^*H)$ is PSD.