I would like to estimate a matrix $S$ by solving the optimization problem \begin{align} &\min\limits_{S} f(S) \\ &\text{subject to }\sum_{(i,j)\neq(1,1)}s_{i,j}=1 \\ &\text{subject to }s_{1,1}=-0.5,\end{align} i.e., I would like all entries $s_{i,j}$ of the solution matrix $S$ except $s_{1,1}$ to sum to 1, and $s_{1,1}=-0.5$.
I solve the problem with the following iterative updates at every iteration $k+1$: \begin{align} &s_{i,j}^{(k+1)}= s_{i,j}^{(k)} - \eta\,\big[\nabla f\big(S^{(k)}\big)\big]_{i,j}\\ &s^{(k+1)}_{i,j} \leftarrow \frac{s^{(k+1)}_{i,j}}{\sum_{(i,j)\neq(1,1)}s^{(k+1)}_{i,j}} \quad\text{for }(i,j)\neq(1,1) \\ &s^{(k+1)}_{1,1}=-0.5\end{align}
The main idea is to take a gradient descent step and then project the updated entries back onto the feasible set by enforcing the constraints at each iteration $k+1$.
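For concreteness, here is a minimal numerical sketch of the scheme above. The objective $f(S)=\tfrac12\|S-A\|_F^2$, the matrix $A$, the step size, and the iteration count are arbitrary assumptions chosen only to make the sketch runnable:

```python
import numpy as np

# Toy objective f(S) = 0.5 * ||S - A||_F^2, so grad f(S) = S - A.
# A is an arbitrary assumption, used only for illustration.
A = np.arange(9.0).reshape(3, 3)

def grad_f(S):
    return S - A

S = np.ones((3, 3))            # arbitrary starting point
eta = 0.1                      # step size (assumed)
mask = np.ones_like(S, dtype=bool)
mask[0, 0] = False             # every entry except s_{1,1}

for _ in range(500):
    S = S - eta * grad_f(S)    # plain gradient step on all entries
    S[mask] /= S[mask].sum()   # rescale so the other entries sum to 1
    S[0, 0] = -0.5             # pin s_{1,1} to its required value

print(S[mask].sum())  # 1 (up to rounding)
print(S[0, 0])        # -0.5
```

Note that, to reproduce the scheme literally, the second step rescales the entries multiplicatively; this enforces the sum constraint exactly, but it is not necessarily the Euclidean (orthogonal) projection onto the affine constraint set.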
My final solution is what I need, but I would like to know whether (and when) what I am doing is correct, and whether there is a proof of that. I know this may depend on the function $f$, so it would be great to explain under which conditions my way of solving the problem is correct.