I have the following convex optimisation problem that I do not know how to prove is unbounded below.
minimise $\sum_{i=1}^{n} \alpha_i - \sum_{i=1}^{n} \beta_i$
subject to $y_i\left[\sum_{j=1}^{n}\lambda_j y_j x_i^T x_j - b\right] = \beta_i - \alpha_i$
$0 \le \lambda_j \le 1, \quad \alpha_i \ge 0, \quad \beta_i \ge 0, \quad i,j=1,\dots,n$
The inputs to this problem are the pairs $(x_i, y_i)$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{1, -1\}$. The variables of this problem are $\alpha_i, \beta_i, \lambda_j, b$, where $i, j = 1, \dots, n$.
I should mention that I've seen Prove this function is unbounded below, and more specifically the first and second paragraphs of the answer, which explain the conditions a function must satisfy in order to be unbounded below, but I cannot apply them to my case. Can anyone help me?
I should also mention that the problem above resembles the Support Vector Machine from machine learning, but it is different!
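In case it helps, the problem is a linear program, so it can be sanity-checked numerically. Here is a sketch of my own (the three-point dataset is made up, chosen so that $\sum_i y_i \neq 0$) that sets it up for `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy data; note sum(y) = 1 != 0
X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 1.0, -1.0])
n = len(y)

K = (y[:, None] * y[None, :]) * (X @ X.T)   # K[i, j] = y_i y_j x_i^T x_j

# Variable layout: [lambda_1..n, b, alpha_1..n, beta_1..n]
c = np.concatenate([np.zeros(n), [0.0], np.ones(n), -np.ones(n)])

# Equality constraint: sum_j K[i, j] lam_j - y_i b + alpha_i - beta_i = 0
A_eq = np.hstack([K, -y[:, None], np.eye(n), -np.eye(n)])
b_eq = np.zeros(n)

bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.status)  # status 3 means "problem appears to be unbounded"
```

On this dataset the solver reports the LP as unbounded, which is what I would like to prove in general.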
The primal problem can be rewritten as \begin{equation} \begin{array}{cl} {\min} & {f(\lambda, b)=-\sum_{i=1}^n y_i \left[\sum_{j=1}^n \lambda_j y_j x_i^{T} x_j - b\right]} \\ {\text{s.t.}} & {0 \leq \lambda_j \leq 1, \quad j=1,\dots,n} \\ {} & {b \in \mathbb{R},} \end{array} \end{equation} since the equality constraint forces $\alpha_i - \beta_i = -y_i\left[\sum_{j=1}^n \lambda_j y_j x_i^{T} x_j - b\right]$ at every feasible point. We can reformulate $f(\lambda, b)$ as \begin{equation} f(\lambda,b) = \left(\sum_{i=1}^n y_i \right) b - \sum_{i=1}^n y_i x_i^T \sum_{j=1}^n \lambda_j y_j x_j. \end{equation} Since each $\lambda_j$ is bounded, the second term of $f(\lambda, b)$ is bounded. Thus $f(\lambda, b)$ is unbounded below unless \begin{equation} \sum_{i=1}^n y_i = 0, \end{equation} because if the sum is nonzero, sending $b \to -\infty$ (when the sum is positive) or $b \to +\infty$ (when it is negative) drives the first term, and hence $f$, to $-\infty$. In other words, the optimization problem is unbounded below if $\sum_{i=1}^n y_i \neq 0$; otherwise it attains a finite optimal value.
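The argument above can be illustrated numerically. The following minimal sketch (the dataset is made up, with $\sum_i y_i = 1 \neq 0$) evaluates the reformulated $f(\lambda, b)$ at a fixed feasible $\lambda$ and increasingly negative $b$:

```python
import numpy as np

# Hypothetical dataset with sum(y) = 1 != 0
X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 1.0, -1.0])

def f(lam, b):
    # f(lambda, b) = (sum_i y_i) * b - sum_i y_i x_i^T (sum_j lam_j y_j x_j)
    v = X.T @ (lam * y)          # v = sum_j lam_j y_j x_j
    return y.sum() * b - y @ (X @ v)

lam = np.array([1.0, 0.5, 0.25])  # any feasible lambda in [0, 1]^n
# Since sum(y) > 0, f decreases without bound as b -> -infinity:
vals = [f(lam, b) for b in [0.0, -1e3, -1e6]]
print(vals)  # strictly decreasing, illustrating unboundedness below
```

The second term stays fixed (it depends only on $\lambda$, which is boxed in $[0,1]^n$), while the first term $\left(\sum_i y_i\right) b$ decreases linearly in $b$, exactly as claimed.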