Suppose, for instance, that we want to find the minimum value of $\sum\limits_{i=1}^n |x_i|$ subject to the constraints $x_i \geq 0$ for all $i$. The Lagrangian of this problem is
$$L(\mathbf{x},\boldsymbol{\lambda}) = \sum\limits_{i=1}^n |x_i| - \sum\limits_{i=1}^n \lambda_i x_i.$$
Using simple arguments we can find that the Lagrange dual function is $g(\lambda)=0$, and furthermore that we must have $-1\leq \lambda_i \leq 1$ for all $i$ (otherwise $g(\lambda)=-\infty$); thus we have found the dual problem of the original problem.
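To spell out the "simple arguments": the infimum separates across coordinates, and for each $i$

$$\inf_{x_i\in\mathbb{R}}\bigl(|x_i|-\lambda_i x_i\bigr)=
\begin{cases}
0, & |\lambda_i|\le 1,\\
-\infty, & |\lambda_i|>1,
\end{cases}$$

since the term equals $(1-\lambda_i)x_i$ for $x_i\ge 0$ and $-(1+\lambda_i)x_i$ for $x_i\le 0$, and each piece is unbounded below exactly when its coefficient is negative.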
However, the form of this problem worries me a little. In particular, we cannot solve for the dual variable $\lambda$ and then use it to recover the value of $x$ that minimizes the original expression, which is quite different from the cases I have met before (such as some optimization problems with quadratic objectives).
I wonder whether there is a mistake in my argument, or whether this is indeed the case.
The dual function is $g(\lambda)=\inf_{x} \left(\|x\|_1 - \lambda^T x\right)$, where $\lambda \ge 0$. I'm not sure I follow your concern; perhaps the following will help.
By choosing $x=0$ we see that $g(\lambda) \le 0$ for all $\lambda \ge 0$.
If any $\lambda_k > 1$ then $g(\lambda) = -\infty$ (let $x_k \to +\infty$).
If any $\lambda_k = 1$, then every $x_k \ge 0$ gives the same cost of zero in that coordinate.
It is clear that $g$ is maximised when $\lambda_k \in [0,1]$ for all $k$, and the maximum value is zero.
The maximum value of zero is always attained with $x=0$, and if any $\lambda_k = 1$ then one can choose any nonnegative value for the corresponding $x_k$.
Note that in the above cases I am working from an optimising $\lambda$ and determining the corresponding $x$ (which is not necessarily unique).
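As a quick numeric sanity check of the case analysis above (a minimal sketch; the helper `g_coord` and the grid bounds are my own choices, and the finite grid only approximates the true infimum over $\mathbb{R}$):

```python
import numpy as np

def g_coord(lam, radius=100.0, n=200001):
    """Approximate inf over x of |x| - lam*x on a symmetric finite grid.

    A numerical stand-in for the per-coordinate infimum in the dual
    function; the true infimum is 0 for |lam| <= 1 and -inf otherwise.
    """
    grid = np.linspace(-radius, radius, n)  # grid includes x = 0
    return float(np.min(np.abs(grid) - lam * grid))

# Inside [-1, 1] the minimum is zero, attained at x = 0.
assert abs(g_coord(0.5)) < 1e-9
assert abs(g_coord(1.0)) < 1e-9
# Outside [-1, 1] the objective is unbounded below: the grid minimum
# grows more negative in proportion to the grid radius.
assert g_coord(1.5) < -40
assert g_coord(1.5, radius=1000.0) < -400
```

The point of the last two assertions is that enlarging the grid drives the minimum further down, which is the discrete reflection of $g(\lambda) = -\infty$ when some $\lambda_k > 1$.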