When I consider a general minimization problem \begin{align} &\min f_0(\textbf{x})\\ &\text{s.t.} \quad g_i(\textbf{x}) \le 0 \qquad i=1,\dots ,p\\ & \qquad\;\; h_i(\textbf{x}) = 0 \qquad i=1,\dots ,m, \end{align}
and its Lagrangian $L(\textbf{x},\lambda,\nu)$, I would expect the dual problem to be something like
\begin{align} &\max g(\lambda,\nu)\\ &\text{s.t.} \quad \lambda \le 0, \end{align}
(since $\lambda$ is a vector, the inequality is meant componentwise), where $g$ is the dual function $g(\lambda,\nu) := \inf_{\textbf{x}} L(\textbf{x},\lambda,\nu)$. I would expect this because I have a minimization problem with $\le$ inequality constraints.
Why, in *Convex Optimization* by Stephen Boyd, does he say that the dual problem has $\lambda \ge 0$, even though he clearly refers to a general problem of the form above?
A cornerstone in the construction of the dual problem is the weak duality theorem: if $f^*$ and $q^*$ denote the optimal values of the primal and dual problems, respectively, then $q^*\leq f^*$. To prove weak duality, define the feasible set $$S=\{\mathbf{x}\in X: g_i(\mathbf{x})\leq 0,\ h_j(\mathbf{x})=0,\ i=1,\dots,m,\; j=1,\dots, p\}.$$ Then for any $(\lambda,\mu)\in\mathbb{R}^m_+\times\mathbb{R}^p$ we have $$\begin{aligned} q(\lambda,\mu)&=\min_{\mathbf{x}\in X} L(\mathbf{x},\lambda,\mu)\\ &\leq \min_{\mathbf{x}\in S} L(\mathbf{x},\lambda,\mu) \\ &=\min_{\mathbf{x}\in S} \Big[f(\mathbf{x}) +\sum_{i=1}^m\lambda_i g_i(\mathbf{x})+\sum_{j=1}^p\mu_j h_j(\mathbf{x})\Big]\\ &\leq \min_{\mathbf{x}\in S} f(\mathbf{x}), \end{aligned}$$ where the last inequality holds because on $S$ every $h_j(\mathbf{x})=0$, so the equality terms vanish, while $\lambda_i\geq 0$ together with $g_i(\mathbf{x})\leq 0$ makes each term $\lambda_i g_i(\mathbf{x})\leq 0$. If you instead required $\lambda\leq 0$, the terms $\lambda_i g_i(\mathbf{x})$ would be nonnegative, the last inequality would fail, and weak duality would be lost.
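To see the sign condition fail concretely, here is a quick numerical sketch on a toy problem of my own choosing (not one of Boyd's examples): minimize $f(x)=x^2$ subject to $g(x)=x-1\le 0$, so $f^*=0$ at $x=0$. Analytically $q(\lambda)=-\lambda^2/4-\lambda$, which stays below $f^*$ for every $\lambda\ge 0$, but e.g. $q(-2)=1>f^*$:

```python
import numpy as np

# Toy problem (illustrative, not from Boyd's book):
#   minimize f(x) = x^2   subject to   g(x) = x - 1 <= 0
# Optimal value f* = 0, attained at x = 0 (the constraint is inactive there).

def dual(lmbda, xs=np.linspace(-10.0, 10.0, 200001)):
    """Dual function q(lambda) = inf_x [x^2 + lambda*(x - 1)],
    approximated by minimizing the Lagrangian over a fine grid.
    Analytically: q(lambda) = -lambda**2/4 - lambda."""
    L = xs**2 + lmbda * (xs - 1.0)
    return L.min()

f_star = 0.0

# Weak duality: for lambda >= 0, q(lambda) is a lower bound on f*.
for lam in [0.0, 0.5, 2.0]:
    assert dual(lam) <= f_star + 1e-9

# With a negative multiplier the "bound" can sit ABOVE f*:
# q(-2) = -(-2)**2/4 - (-2) = 1 > 0 = f*.
print(dual(-2.0))  # ≈ 1.0, strictly above f* = 0
```

So flipping the sign constraint to $\lambda\le 0$ produces dual values that are not lower bounds on the primal optimum, which is exactly the failure of weak duality described above.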