Consider the optimization problem over $\mathbb{R}^d$:
$$\max_z f(z)$$ $$\text{s.t.}\ z \in \mathcal{C}$$
where $\mathcal{C}$ is a convex subset of $\mathbb{R}^d$ and $f$ is a real-valued convex function. To form the Lagrangian of this optimization problem I would introduce a scalar Lagrange multiplier $\lambda < 0$ and an indicator function $\mathbb{1}(z \not\in \mathcal{C})$:
$$\mathcal{L}(z, \lambda) = f(z) + \lambda\, \mathbb{1}(z \not\in \mathcal{C})$$
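To make the difficulty concrete, here is a small sketch (my own toy setup, not part of the question: $\mathcal{C}$ is the closed unit ball, $f$ is a linear toy objective, and $\lambda = -5$ is an arbitrary choice) showing that $\mathcal{L}$ jumps by $\lambda$ at the boundary of $\mathcal{C}$:

```python
import numpy as np

def lagrangian(z, lam, f):
    """L(z, lam) = f(z) + lam * 1(z not in C), with C the closed unit ball."""
    outside = np.linalg.norm(z) > 1.0
    return f(z) + lam * float(outside)

f = lambda z: float(np.sum(z))  # toy objective, for illustration only
lam = -5.0                      # lam < 0 penalizes infeasible z

# Evaluate along the ray z = t * (1, 0): L jumps by lam as t crosses 1,
# so the Lagrangian is discontinuous (hence nonconvex) in z.
left  = lagrangian(np.array([0.999, 0.0]), lam, f)
right = lagrangian(np.array([1.001, 0.0]), lam, f)
print(right - left)  # ≈ -5: the jump at the boundary of C
```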
However, the indicator function is nonconvex and difficult to optimize. Instead, I want to introduce a vector-valued Lagrange multiplier $\nu$ restricted to the polar cone of $\mathcal{C}$, defined as:
$$\text{polar}(\mathcal{C}) = \{ y \mid \langle y, x\rangle \leq 0 \ \forall x \in \mathcal{C} \}$$
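As a concrete sanity check (my own example, not from the question): taking $\mathcal{C} = \mathbb{R}^d_+$, the nonnegative orthant, its polar cone is the nonpositive orthant $\mathbb{R}^d_-$, so membership reduces to a componentwise sign test:

```python
import numpy as np

def in_polar_of_orthant(y, tol=1e-12):
    """Membership test for polar(R^d_+).

    <y, x> <= 0 for every x >= 0 holds iff every component of y is
    nonpositive, i.e. polar(R^d_+) = R^d_-.
    """
    return bool(np.all(y <= tol))

# Verify against the definition on random nonnegative x in C.
rng = np.random.default_rng(0)
y = np.array([-1.0, -0.5, 0.0])  # a point in the polar cone
for _ in range(100):
    x = rng.random(3)            # x >= 0, i.e. x in C
    assert np.dot(y, x) <= 1e-12

print(in_polar_of_orthant(np.array([-1.0, -0.5, 0.0])))  # True
print(in_polar_of_orthant(np.array([0.2, -1.0, 0.0])))   # False
```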
so that my dual problem becomes:
$$\max_z \min_\nu f(z) - \langle \nu, z\rangle$$ $$\text{s.t.}\ \nu \in \text{polar}(\mathcal{C})$$
In this case the inner minimization penalizes the chosen $z$ if it does not satisfy the constraint: when $\mathcal{C}$ is a closed convex cone, the minimum over $\nu$ is $0$ if $z \in \mathcal{C}$ (attained at $\nu = 0$) and $-\infty$ otherwise.
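The penalization mechanism can be illustrated numerically (again with the hypothetical choice $\mathcal{C} = \mathbb{R}^d_+$, and with $\nu$ additionally norm-bounded so the inner minimum stays finite; without the bound it is $-\infty$ whenever $z \notin \mathcal{C}$):

```python
import numpy as np

def inner_min(z, radius=1.0):
    """min_{nu in polar(C), ||nu|| <= radius} -<nu, z> for C = R^d_+.

    Over the polar cone (the nonpositive orthant), the minimizing nu aligns
    with the negative part of z, giving the value -radius * ||min(z, 0)||.
    Letting radius -> inf recovers the exact penalty: 0 on C, -inf off C.
    """
    neg = np.minimum(z, 0.0)  # componentwise negative part of z
    return -radius * np.linalg.norm(neg)

print(inner_min(np.array([1.0, 2.0])))   # z in C: minimum is 0 (at nu = 0)
print(inner_min(np.array([1.0, -3.0])))  # z outside C: strictly penalized
```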
My questions are: Is this commonly done? Are there alternative approaches? Can you recommend references for picking up these kinds of tricks? I know I could instead introduce a barrier function (as in interior-point methods).