I'm considering the inequality-constrained optimization problem of finding $$ x^{\star} = \arg \min_{x} f(x) \;\; \text{s.t.} \;\; h(x) \le 0 $$ which is assumed to have a unique minimizer. The objective $f$ maps $\mathbb{R}^{n} \rightarrow \mathbb{R}$, and $h$ maps $\mathbb{R}^{n} \rightarrow \mathbb{R}^m$, capturing multiple constraints. This problem has Lagrangian $L(x, \lambda) = f(x) + \lambda^{T} h(x)$, and the associated dual problem is to find some $$ \lambda^{\star} \in \arg \max_{\lambda} g(\lambda) \;\; \text{s.t.} \;\; \lambda\ge 0 $$ where the dual function is defined as $g(\lambda) = \min_{x} L(x, \lambda)$.
Assuming that strong duality holds, the optimal value of the primal problem $p^{\star} = f(x^{\star})$ and that of the dual problem $d^{\star} = g(\lambda^{\star})$ are equal.
QUESTION: If we define $\hat{x}(\lambda) \in \arg \min_{x} L(x, \lambda)$ to be a value of the primal variable $x$ which yields the dual function $g(\lambda) = L(\hat{x}(\lambda), \lambda)$, then do there exist functions $(f, h)$ for which this is not necessarily equal to the solution, i.e. $\hat{x}(\lambda^{\star}) \ne x^{\star}$?
This is entirely possible, and happens all the time. The issue is that $L(\cdot, \lambda^{\star})$ may have multiple minimizers, and only some of them are primal optimal. To guarantee that the subproblem solution obtained at the dual optimum is the true primal optimum, you need the minimizer of $L(\cdot, \lambda^{\star})$ to be unique; for a convex problem this is equivalent to the dual function $g$ being differentiable at $\lambda^{\star}$, and strict convexity of the Lagrangian in $x$ is a sufficient condition for it. In the decomposition literature for linear programming, where the dual is piecewise linear and typically nondifferentiable, this is called the "problem of non-coordinability".
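To make this concrete, here is a small worked example of my own (not from any particular reference): minimize $f(x) = -x$ subject to $h(x) = x - 1 \le 0$ over $x \in \mathbb{R}$. The primal solution is $x^{\star} = 1$ with $p^{\star} = -1$. The Lagrangian is $L(x, \lambda) = (\lambda - 1)x - \lambda$, so $g(\lambda) = -\infty$ unless $\lambda = 1$, giving $\lambda^{\star} = 1$ and $d^{\star} = -1$ (strong duality holds). But $L(\cdot, 1)$ is constant, so *every* $x$ is a minimizer, and a subproblem solver is free to return some $\hat{x}(\lambda^{\star}) \ne 1$. A quick numerical sketch:

```python
import numpy as np

# Lagrangian of:  minimize -x  s.t.  x - 1 <= 0
# L(x, lam) = -x + lam * (x - 1) = (lam - 1) * x - lam
def L(x, lam):
    return -x + lam * (x - 1)

lam_star = 1.0            # the unique dual optimum for this problem
xs = np.linspace(-5.0, 5.0, 11)

# At lam*, the Lagrangian is constant in x: every x attains g(lam*) = -1,
# so the set arg min_x L(x, lam*) is all of R.
vals = L(xs, lam_star)
assert np.allclose(vals, -1.0)

# A generic "solver" breaking ties arbitrarily (here: first grid point)
# returns x_hat = -5.0, which is not the primal solution x* = 1.
x_hat = xs[np.argmin(vals)]
print(x_hat)
```

So dual optimality pins down the optimal *value*, but not which minimizer of the Lagrangian the subproblem hands back.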
This has actually been a fairly active area of research; see, for example, http://link.springer.com/article/10.1007%2Fs10107-014-0772-2#/page-1 for a decent exposition.