The Lagrangian is defined as:
$L(x, \{\lambda_k\}) = f_0(x) + \sum_{k=1}^m \lambda_k f_k(x)$
where each $f_k(x) \leq 0$, $k \in \{1, \dots, m\}$, is an inequality constraint (which can be violated at a given $x$?)
and we are trying to minimize $f_0(x)$, the objective function.
To find the saddle point, $L$ is first minimized with respect to $x$. This means the entire sum is minimized jointly, objective function included, not each $f_k(x)$ separately; the result is the dual function $g(\lambda) = \min_x L(x, \lambda)$.
If each $\lambda_k$ is only required to be $\geq 0$, how is the trivial solution (setting all $\lambda_k = 0$) avoided when then maximizing $L$ with respect to $\lambda$? After the inner minimization over $x$, the function being maximized no longer depends on $x$.
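The key point is that the inner minimizer $x^\star(\lambda)$ moves with $\lambda$, so the dual function is not flat in $\lambda$ and its maximum is generally not at $\lambda = 0$. A small numeric sketch on a toy problem of my own choosing (minimize $f_0(x) = x^2$ subject to $f_1(x) = 1 - x \leq 0$, i.e. $x \geq 1$; not from the question) illustrates this:

```python
import numpy as np

# Toy problem (illustrative example, not from the question):
#   minimize f0(x) = x^2  subject to  f1(x) = 1 - x <= 0  (i.e., x >= 1)
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x)

def dual(lam):
    """g(lam) = min_x L(x, lam). Setting dL/dx = 2x - lam = 0 gives x* = lam/2."""
    x_star = lam / 2.0
    return x_star**2 + lam * (1.0 - x_star)

# Maximize g over a grid of lam >= 0.
lams = np.linspace(0.0, 4.0, 401)
g = np.array([dual(l) for l in lams])
lam_best = lams[np.argmax(g)]

# Because x*(lam) = lam/2 shifts with lam, g(lam) = lam - lam^2/4 is concave
# with its maximum at lam = 2, where g(2) = 1 equals the primal optimum
# f0(1) = 1. The trivial choice lam = 0 gives only g(0) = 0 < 1.
print(lam_best, dual(lam_best), dual(0.0))
```

So setting all $\lambda_k = 0$ is avoided automatically: it yields $g(0) = \min_x f_0(x)$, a lower bound that the maximization over $\lambda \geq 0$ tightens whenever a constraint is active at the optimum.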