The following text is derived from the book *Convex Optimization* by Boyd and Vandenberghe, page 143.
For a convex problem the equality constraints must be linear, i.e., of the form $Ax = b$. In this case they can be eliminated by finding a particular solution $x_0$ of $Ax = b$, and a matrix $F$ whose range is the nullspace of $A$, which results in the problem:
\begin{equation*} \begin{aligned} & \underset{z}{\text{minimize}} & & f_0(Fz + x_0) \\ & \text{subject to} & & f_i(Fz + x_0) \leq 0, \; i = 1, \ldots, m. \end{aligned} \end{equation*}
my question:
What is the subspace reasoning behind $Fz + x_0$ that makes this formulation equivalent to a convex optimization problem in standard form with affine equality constraints?
Thank you!
This has nothing to do with the optimization problem as such; it is just a way of parametrising the set $S = \{ x \mid Ax = b\}$.
Suppose ${\cal R}(F) = \ker A$ and $x_0 \in S$. Since $x_0 \in S$ we have $Ax_0 = b$.
Then $x \in S$ iff $Ax = b$ iff $Ax = Ax_0$ iff $A(x - x_0) = 0$ iff $x - x_0 \in \ker A$ iff $x - x_0 \in {\cal R}(F)$ iff $x \in \{ x_0 + Fy \}_y$.
That is, $S = \{ x_0 + Fy \}_y$, the set of all points of the form $x_0 + Fy$ for some $y$ in the domain of $F$.
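As a quick numerical sanity check of this parametrisation, here is a sketch using `numpy` and `scipy` (the matrix $A$ and vector $b$ are made-up random data): a particular solution $x_0$ comes from least squares, and `null_space` gives a matrix $F$ whose columns span $\ker A$, so every point $x_0 + Fy$ satisfies $Ax = b$.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# Hypothetical underdetermined system Ax = b (2 equations, 5 unknowns).
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)

# A particular solution x0 of Ax = b (least squares returns one exactly
# when A has full row rank, as a random Gaussian matrix does).
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

# Columns of F form an orthonormal basis of ker(A), so range(F) = ker(A).
F = null_space(A)            # shape (5, 3)

# Any x of the form x0 + F y is feasible: A x = A x0 + (A F) y = b + 0.
y = rng.standard_normal(F.shape[1])
x = x0 + F @ y
print(np.allclose(A @ x, b))   # → True
```

Conversely, any feasible $x$ can be written this way by taking $y = F^\top(x - x_0)$, since $x - x_0 \in \ker A = {\cal R}(F)$ and the columns of $F$ are orthonormal.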
Hence if we have some problem ${\cal P}: \ \min \{ f_0(x) | f_i(x) \le 0, Ax=b \}$, it is equivalent to solving ${\cal P}': \ \min \{ f_0(x_0+Fy) | f_i(x_0+Fy) \le 0 \}$, the difference being that the constraint $x \in S$ is implicitly enforced in ${\cal P}'$.
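To illustrate the equivalence of ${\cal P}$ and ${\cal P}'$ concretely, here is a sketch (hypothetical toy data; the objective $f_0(x) = \|x - c\|^2$ and the absence of inequality constraints are my simplifying assumptions) that solves the reduced problem ${\cal P}'$ with an unconstrained solver and checks the result against the known closed-form solution of the equality-constrained problem:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))     # made-up constraint data
b = rng.standard_normal(2)
c = rng.standard_normal(5)

x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # particular solution of Ax = b
F = null_space(A)                            # range(F) = ker(A)

def f0(x):
    # Toy convex objective: squared distance to c.
    return np.sum((x - c) ** 2)

# P': unconstrained minimization over y; x in S is enforced implicitly.
res = minimize(lambda y: f0(x0 + F @ y), np.zeros(F.shape[1]))
x_star = x0 + F @ res.x

# Feasibility holds by construction, not because the solver enforced it.
print(np.allclose(A @ x_star, b))   # → True

# Closed-form minimizer of ||x - c||^2 subject to Ax = b, for comparison:
x_exact = c - A.T @ np.linalg.solve(A @ A.T, A @ c - b)
print(np.allclose(x_star, x_exact, atol=1e-4))   # → True
```

The point of the comparison is that ${\cal P}'$ has no constraints at all, yet its minimizer maps back to the constrained optimum of ${\cal P}$ via $x^\star = x_0 + Fy^\star$.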