$$\mathcal{L}(x,\lambda,\nu) = f(x) + \lambda^\top g(x) + \nu^\top h(x)$$
where $g$ collects the inequality constraints and $h$ the equality constraints, so feasibility of $x^*$ means $g_i(x^*) \leq 0$ and $h_i(x^*) = 0$, and the multipliers satisfy $\lambda_i \geq 0$.
To optimise, we set the gradient of $\mathcal{L}$ with respect to $x$ to zero. (Or solve the dual, I suppose.)
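To make this concrete, here is a toy example of my own (not taken from any particular text): minimize $f(x,y) = x^2 + y^2$ subject to the single equality constraint $x + y = 1$. The Lagrangian is

$$\mathcal{L}(x,y,\nu) = x^2 + y^2 + \nu(x + y - 1),$$

and setting its gradient to zero gives

$$2x + \nu = 0, \qquad 2y + \nu = 0, \qquad x + y - 1 = 0 \;\implies\; x^* = y^* = \tfrac12,\ \nu^* = -1.$$

Note that the derivative with respect to $\nu$ simply recovers the constraint itself.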
So far, so good, but then how are these constraints enforced?
Assuming your problem is convex, the KKT theorem ensures that, under certain regularity conditions (e.g. Slater's condition), $x^*$ is a global minimum iff there exist multipliers $\lambda^* \geq 0$ and $\nu^*$ satisfying: \begin{equation} \begin{aligned} \nabla_x \mathcal{L}(x^*,\lambda^*,\nu^*) &= 0 \\ g_i(x^*) \leq 0, \quad h_i(x^*) &= 0 \qquad \forall i \\ \lambda_i^* \, g_i(x^*) &= 0 \qquad \forall i \end{aligned} \end{equation} So, for each candidate you obtain by setting the gradient of $\mathcal{L}(x,\lambda,\nu)$ to zero, you should check that the remaining conditions are met.
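The check described above is easy to do numerically. Here is a minimal sketch in Python for a hypothetical problem of my own choosing: minimize $f(x) = x^2$ subject to $g(x) = 1 - x \leq 0$ (i.e. $x \geq 1$), whose KKT solution works out to $x^* = 1$, $\lambda^* = 2$. The helper names are mine, not from any library.

```python
# Toy problem (assumed for illustration): minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0. Stationarity gives 2x - lambda = 0,
# and complementary slackness forces the constraint active: x* = 1, lambda* = 2.

f = lambda x: x**2
g = lambda x: 1.0 - x

def grad_L(x, lam, eps=1e-6):
    """Central-difference gradient of the Lagrangian L(x, lam) = f(x) + lam * g(x)."""
    L = lambda z: f(z) + lam * g(z)
    return (L(x + eps) - L(x - eps)) / (2 * eps)

x_star, lam_star = 1.0, 2.0

# Verify each KKT condition at the candidate point.
checks = {
    "stationarity": abs(grad_L(x_star, lam_star)) < 1e-4,
    "primal feasibility": g(x_star) <= 1e-9,
    "dual feasibility": lam_star >= 0,
    "complementary slackness": abs(lam_star * g(x_star)) < 1e-9,
}
print(checks)
```

All four entries come out `True` at this candidate; a point that merely zeroes the gradient but violates, say, primal feasibility would fail the corresponding check.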