One way to eliminate the equality constraint from an optimisation problem is to use a matrix $F$ whose range space is the null space of the matrix appearing in the equality constraint, as follows:
The original problem, which is a convex problem: $$ \min\ f_0(x) \\ \text{s.t. } f_i(x) \le 0, \quad i = 1,\dots,m \\ Ax = b $$
and the problem with the equality constraint eliminated:
$$ \min\ f_0(Fz + x_0) \\ \text{s.t. } f_i(Fz + x_0) \le 0, \quad i = 1,\dots,m $$ where $x_0$ is any particular solution of $Ax_0 = b$.
After a lot of searching, what I have found is that I should use the SVD of the matrix $A$ from the equality constraint, $A = U \Sigma V^T$, and that multiplying it by $Fz$ should give zero:
$$ AFz = (U \Sigma V^T)Fz = 0 $$
but I cannot find a property that satisfies this equation. Does anyone have any suggestions?
I should point out that I've seen Eliminating equality constraints, but it only explains how this technique is employed; it does not show how to find the matrix $F$.
If we have the SVD of $A$, then the following approach is possible. Partition the matrices $U,\Sigma,V$ so that $$ \Sigma = \pmatrix{\Sigma_0 & 0\\0 & 0}, \quad V = \pmatrix{V_1 & V_2} $$ where $\Sigma_0$ is $r \times r$ with non-zero diagonal entries, and $V_1$ has $r$ columns. The nullspace of $A$ is spanned by the columns of $V_2$. Thus, $F= V_2$ is a matrix whose range is the nullspace of $A$.
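As a sanity check, here is a short NumPy sketch of this construction (the example matrix and variable names are illustrative, not from the post): it extracts $F = V_2$ from the full SVD and verifies that $AF = 0$.

```python
import numpy as np

# Illustrative example: a random 3x5 matrix, so the nullspace has dimension n - r.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))

U, s, Vt = np.linalg.svd(A)              # full SVD; Vt has shape (5, 5)
tol = max(A.shape) * np.finfo(float).eps * s[0]
r = int(np.sum(s > tol))                 # numerical rank of A
F = Vt[r:].T                             # columns of V_2 span the nullspace of A

print(F.shape)                           # (5, 2): n - r basis vectors
print(np.allclose(A @ F, 0))             # True: range(F) lies in null(A)
```

Note that `np.linalg.svd` returns $V^T$ (as `Vt`), so the columns of $V_2$ are the *rows* of `Vt` past the numerical rank, transposed.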
In response to the comment:
Suppose that $A$ has size $m \times n$. Recall that $U,V$ are orthogonal (and therefore invertible) matrices. So, we have $$ Az = 0 \iff U(\Sigma V^T)z = 0 \iff \Sigma (V^Tz) = 0. $$ We see that $z$ is in the nullspace of $A$ if and only if $V^Tz$ is in the nullspace of $\Sigma$. Now, if we break a column vector $w$ into blocks $w = \pmatrix{w_1\\w_2}$ (with $w_1$ of length $r$), then block-matrix multiplication gives $$ \Sigma w = \pmatrix{\Sigma_0 & 0\\ 0&0} \pmatrix{w_1\\w_2} = \pmatrix{\Sigma_0 w_1\\ 0}. $$ Since $\Sigma_0$ is invertible, $\Sigma w = 0$ if and only if $w_1 = 0$. In other words, the nullspace of $\Sigma$ is spanned by the columns $e_{r+1},\dots,e_n$ (where $e_k$ is the $k$th standard basis vector of $\Bbb R^n$, i.e. the $k$th column of the $n \times n$ identity matrix). This means that the vectors $z_{r+1},\dots,z_n$ that solve $V^Tz_k = e_k$ form a basis of the nullspace of $A$. With that, we see that $$ V^Tz_k = e_k \implies z_k = Ve_k, $$ which is to say that $z_k$ is the $k$th column of $V$. So, the columns $V e_{r+1},\dots,V e_n$ of $V$, i.e. the columns of $V_2$, form a basis of the nullspace.
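Tying this back to the original problem: with $F = V_2$ and any particular solution $x_0$ of $Ax_0 = b$, every point $x = Fz + x_0$ satisfies the equality constraint, so the constraint $Ax = b$ disappears from the reduced problem. A hedged NumPy sketch (the sizes and the pseudoinverse-based choice of $x_0$ are my own, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))          # full row rank, so Ax = b is solvable for any b
b = rng.standard_normal(3)

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))               # numerical rank
F = Vt[r:].T                             # F = V_2, a basis of null(A)

x0 = np.linalg.pinv(A) @ b               # one particular solution of A x0 = b
z = rng.standard_normal(F.shape[1])      # arbitrary reduced variable
x = F @ z + x0                           # change of variables x = Fz + x0

print(np.allclose(A @ x, b))             # True: the constraint holds for every z
```

Any other particular solution $x_0$ (e.g. from a QR factorization) works equally well; the pseudoinverse is just a convenient choice.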