I am going through an introductory textbook on optimization where the following is said:
"Optimization within a subspace or linear variety can often be reformulated as unconstrained optimization, and normally optimization within a subset that is neither subspace nor a linear variety cannot be formulated as a unconstrained problem."
What is the meaning of this statement?
Suppose that $V$ is a finite-dimensional vector space and that $W\subseteq V$ is a subspace or a linear variety (that is, a translated subspace) of $V$. For simplicity, let's assume that $V=\mathbb{R}^n$. Then we can rewrite any $x\in W$ as
$$x=Ty+a,$$
where $y\in\mathbb{R}^m$, $a\in\mathbb{R}^n$, the columns of $T$ span the subspace $W-\{a\}:=\{x-a:x\in W\}$ (note that if $W$ is a subspace, then we can take $a=0$), and $m=\dim(W-\{a\})$.
Suppose that $f:\mathbb{R}^n\rightarrow\mathbb{R}$. With the above we can rewrite the constrained optimization problem
$$p=\min_{x\in W}f(x)$$
as the unconstrained problem
$$p=\min_{y} f(Ty+a).$$
In addition, if we find a $y^*$ such that $p=f(Ty^*+a)$, then we can recover a minimizer $x^*$ such that $f(x^*)=p$, namely $x^*=Ty^*+a$.
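To make this concrete, here is a minimal numerical sketch of the reparametrization. The objective $f(x)=\|x\|^2$ and the linear variety $W=\{x\in\mathbb{R}^2: x_1+x_2=1\}$ are my own choices for illustration, not from the question; I use `scipy.optimize.minimize` as a generic unconstrained solver.

```python
import numpy as np
from scipy.optimize import minimize

# Feasible set W = {x in R^2 : x1 + x2 = 1}, written as x = T y + a.
a = np.array([1.0, 0.0])        # any particular point of W
T = np.array([[1.0], [-1.0]])   # columns span the subspace W - {a}

def f(x):
    return x @ x                # objective: squared Euclidean norm

# Unconstrained reformulation: minimize phi(y) = f(T y + a) over y in R.
phi = lambda y: f(T @ y + a)
res = minimize(phi, x0=np.zeros(1))

x_star = T @ res.x + a          # recover the constrained minimizer x* = T y* + a
print(x_star)                   # ≈ [0.5, 0.5], the projection of 0 onto W
```

Note that `x_star` automatically satisfies the constraint, because every $Ty+a$ lies in $W$ by construction.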
In general, that is, if $W$ is neither a subspace nor a linear variety, it might not be possible to do the above. The trick is to be able to find a bijective mapping $g$ from $\mathbb{R}^m$ onto the feasible set $W$, where $m\leq n$ (above we had $g(y)=Ty+a$).
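Such a bijection can sometimes be found even for non-affine sets. A classic example (my choice, not from the question) is the positive half-line $W=(0,\infty)$ with $g(y)=e^y$, which is a bijection from $\mathbb{R}$ onto $W$:

```python
import numpy as np
from scipy.optimize import minimize

# Feasible set W = (0, inf); g(y) = exp(y) is a bijection from R onto W.
def f(x):
    return x + 1.0 / x           # over x > 0 this is minimized at x = 1

phi = lambda y: f(np.exp(y[0]))  # unconstrained problem in y
res = minimize(phi, x0=np.array([2.0]))

x_star = np.exp(res.x[0])        # recover x* = g(y*)
print(x_star)                    # ≈ 1.0
```

The same idea underlies common "change of variables" tricks, e.g. optimizing over positive parameters via their logarithm.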
That said, there are other ways of finding (or at least approximating) the solution of a constrained problem by solving unconstrained problems (for example, see here; this is not a very good reference, maybe someone can suggest a better one?).
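One standard such approach is the quadratic penalty method: replace $\min_{h(x)=0} f(x)$ by the unconstrained problems $\min_x f(x)+\mu\, h(x)^2$ for an increasing sequence of weights $\mu$. A sketch with an illustrative problem of my own choosing (same $f$ and constraint as above, not from the question):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x @ x                       # objective

def h(x):
    return x[0] + x[1] - 1.0           # h(x) = 0 defines the feasible set

# Quadratic penalty method: solve a sequence of unconstrained problems,
# warm-starting each one at the previous solution.
x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    x = minimize(lambda z: f(z) + mu * h(z) ** 2, x).x

print(x)  # ≈ [0.5, 0.5] as mu grows
```

Each penalized minimizer is only approximately feasible; the iterates converge to the constrained solution as $\mu\to\infty$.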