For the optimization problem defined as $p^* = \min_{x \in \mathbb{R}^n} f_0(x)$ subject to the constraints $f_i(x) \le 0$, $i = 1, \ldots, m$, can the problem be expressed as one without constraints?
I think the answer is yes. If we get rid of the constraints, then those conditions would have to be "added on" to the objective function, right? So, if we have some constraint $f_i(x) \le 2$, if we were to get rid of it, that condition would somehow have to become a part of $f_0(x)$.
Am I thinking about this correctly? If not, how should I be going about solving this problem? I can kind of picture how it can be true, but I just can't seem to prove it mathematically. Thank you.
Yes, this is in fact what many numerical optimization algorithms do. In so-called penalty methods, a penalty function is added to the objective; it worsens the objective value the further the point is from satisfying the constraints. Your intuition can be made exact with an indicator function: $p^* = \min_{x \in \mathbb{R}^n} f_0(x) + \sum_i I_-(f_i(x))$, where $I_-(u) = 0$ if $u \le 0$ and $I_-(u) = +\infty$ otherwise. This unconstrained problem has the same optimal value and the same minimizers as the original one; practical penalty functions are smooth, finite approximations of that indicator.
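To make this concrete, here is a minimal sketch of a quadratic penalty method in Python. Everything here is my own invented toy example, not a standard API: the problem instance, the names `f0`, `g`, `penalty_method`, the step sizes, and the schedule of penalty weights. The toy problem is: minimize $(x_1-2)^2 + (x_2-1)^2$ subject to $x_1 + x_2 - 1 \le 0$, whose exact constrained optimum is $(1, 0)$.

```python
import numpy as np

# Toy problem (my own example, not from the question):
#   minimize   f0(x) = (x1 - 2)^2 + (x2 - 1)^2
#   subject to g(x)  = x1 + x2 - 1 <= 0
# The exact constrained optimum is x* = (1, 0).

def f0(x):
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def g(x):
    return x[0] + x[1] - 1.0

def grad_penalized(x, mu):
    # gradient of the penalized objective f0(x) + mu * max(0, g(x))^2
    grad = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
    if g(x) > 0:
        grad += 2.0 * mu * g(x) * np.array([1.0, 1.0])  # grad g = (1, 1)
    return grad

def penalty_method(mu_schedule, iters=20000):
    x = np.zeros(2)
    for mu in mu_schedule:
        lr = 1.0 / (2.0 + 4.0 * mu)  # stable step size for this quadratic
        for _ in range(iters):       # warm-started gradient descent
            x = x - lr * grad_penalized(x, mu)
    return x

x_star = penalty_method([1.0, 10.0, 100.0, 1000.0])
```

As the penalty weight $\mu$ grows, the minimizer of the penalized function approaches the constrained optimum from the infeasible side (for this particular toy problem the violation works out to $2/(1+2\mu)$ in exact arithmetic), which is why penalty methods typically solve a sequence of problems with increasing $\mu$ rather than a single huge one.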
For inequality constraints like the ones you show, barrier functions are a popular choice. The logarithmic barrier $-\sum_i \log(-f_i(x))$ is the standard pick when you intend to stay at feasible points while searching for the optimum: it is finite inside the feasible set and blows up as you approach its boundary. Methods that insist on keeping every iterate inside the feasible set are called interior-point methods.
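Here is a minimal interior-point sketch in the same spirit, again with an invented toy problem and invented names (`f0`, `g`, `barrier_value`, `interior_point`): minimize $(x_1-2)^2 + (x_2-1)^2$ subject to $x_1 + x_2 - 1 \le 0$. It minimizes $t\,f_0(x) - \log(-g(x))$ by damped Newton steps for an increasing sequence of $t$, backtracking so every iterate stays strictly feasible.

```python
import numpy as np

# Toy problem (my own example):
#   minimize   f0(x) = (x1 - 2)^2 + (x2 - 1)^2
#   subject to g(x)  = x1 + x2 - 1 <= 0
a = np.array([1.0, 1.0])  # gradient of the linear constraint g

def f0(x):
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def g(x):
    return x[0] + x[1] - 1.0

def barrier_value(x, t):
    s = -g(x)  # slack; must stay strictly positive
    return t * f0(x) - np.log(s) if s > 0 else np.inf

def newton_step(x, t):
    s = -g(x)
    grad = t * np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)]) + a / s
    hess = 2.0 * t * np.eye(2) + np.outer(a, a) / s**2
    return np.linalg.solve(hess, -grad)

def interior_point(t_schedule, newton_iters=50):
    x = np.array([-1.0, -1.0])  # strictly feasible start: g(x) = -3 < 0
    for t in t_schedule:
        for _ in range(newton_iters):
            d = newton_step(x, t)
            alpha = 1.0
            # backtrack until the step stays feasible and decreases the barrier
            for _ in range(60):
                if barrier_value(x + alpha * d, t) < barrier_value(x, t):
                    break
                alpha *= 0.5
            x = x + alpha * d
    return x

x_star = interior_point([1.0, 10.0, 100.0, 1000.0])
```

In contrast to the penalty method, every iterate here is strictly feasible, and the minimizers for increasing $t$ trace out the so-called central path, approaching the constrained optimum from inside the feasible set.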