For $G_{1}(x) = I_{C}(x)$ with $C = \{x \mid Ax = b\}$, the proximal operator is:
$$ \operatorname{Prox}_{\gamma G_{1}}(x) = \operatorname{Proj}_{C}(x) = x + A^{T}(AA^{T})^{-1}(b - Ax) $$
I would like to know how to derive this result.
And, for example, if the indicator function involves two variables, say
$$ G_{1}(E,x) = I_{C}(E,x), \quad C = \{(E,x) \mid Ax + E = b\}, $$
how does one compute $\operatorname{Prox}_{\gamma G_{1}}(E,x)$?
The general way to compute the proximal operator of a function $f$ is to solve
$$ \operatorname{Prox}_{f}(x) = \arg\min_y f(y) + \frac{1}{2} \|x-y\|^2. $$
The indicator function in convex optimization takes the value $0$ inside the set and $+\infty$ outside, so the constraint must be satisfied at any minimizer (note also that the factor $\gamma$ is irrelevant here, since $\gamma I_C = I_C$). The problem is therefore equivalent to
$$ \operatorname{Prox}_{\gamma G_1}(x) = \operatorname{Proj}_C(x) = \arg\min_{y:\ Ay=b} \frac{1}{2} \|x-y\|^2. $$
The fundamental theorem of linear algebra gives, for any $x$, a decomposition of the form $$ x = A^T\alpha + \beta $$ with $\beta\in \operatorname{Ker}(A)$, and the two terms are orthogonal. Assuming $AA^T$ is invertible, multiplying both sides by $A$ gives $Ax = AA^T\alpha$, hence $\alpha = (AA^T)^{-1}Ax$ and the closed form $A^T\alpha = A^T(AA^T)^{-1}Ax$.
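This decomposition is easy to check numerically. A minimal numpy sketch (random $A$ and $x$ of my own choosing, assuming $A$ has full row rank so that $AA^T$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # full row rank almost surely
x = rng.standard_normal(5)

# x = A^T alpha + beta, with beta in Ker(A)
alpha = np.linalg.solve(A @ A.T, A @ x)   # alpha = (A A^T)^{-1} A x
beta = x - A.T @ alpha

print(np.linalg.norm(A @ beta))           # ~0: beta lies in Ker(A)
print(abs((A.T @ alpha) @ beta))          # ~0: the two parts are orthogonal
```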
The same kind of decomposition works for $y$, but the constraint $Ay=b$ fixes $\alpha_y = (AA^T)^{-1}b$, so $\beta_y$ is the only part free to vary. By orthogonality, $\|x-y\|^2 = \|A^T(\alpha-\alpha_y)\|^2 + \|\beta-\beta_y\|^2$, which is minimized by taking $\beta_y = \beta$. Thus the minimizer takes the form
$$ y = A^T \alpha_y + \beta = A^T(AA^T)^{-1}b + x - A^T(AA^T)^{-1}Ax = x + A^T(AA^T)^{-1}(b - Ax). $$
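As a sanity check, the closed form can be compared with the solution of the KKT system of the constrained least-squares problem (stationarity $y + A^T\lambda = x$ plus feasibility $Ay = b$). A small numpy sketch with random data of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)

# Closed form: y = x + A^T (A A^T)^{-1} (b - A x)
y_closed = x + A.T @ np.linalg.solve(A @ A.T, b - A @ x)

# KKT system for  min 1/2 ||x - y||^2  s.t.  A y = b :
#   [ I  A^T ] [y]     [x]
#   [ A   0  ] [lam] = [b]
K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([x, b]))
y_kkt = sol[:n]

print(np.linalg.norm(y_closed - y_kkt))   # ~0: both approaches agree
print(np.linalg.norm(A @ y_closed - b))   # ~0: constraint satisfied
```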
The other problem follows the same idea: I think you end up optimizing a quadratic form in $E$ and $x$. A good exercise to do after understanding the method.
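One possible route (my own reformulation, not spelled out in the answer above): stack $z = (E, x)$ and write the constraint $Ax + E = b$ as $\tilde A z = b$ with $\tilde A = [\,I \;\; A\,]$; then the one-variable formula applies with $\tilde A \tilde A^T = I + AA^T$, which is always invertible. A hedged numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
E = rng.standard_normal(m)
x = rng.standard_normal(n)

# Stack z = (E, x); the constraint A x + E = b reads A_tilde z = b
# with A_tilde = [I, A], so A_tilde A_tilde^T = I + A A^T.
r = np.linalg.solve(np.eye(m) + A @ A.T, b - E - A @ x)
E_new = E + r            # projected E  (first block of A_tilde^T r)
x_new = x + A.T @ r      # projected x (second block of A_tilde^T r)

print(np.linalg.norm(A @ x_new + E_new - b))   # ~0: constraint satisfied
```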