I have two functions $f(x,y)$ and $g(x,y)$, and I want to minimize their sum $f(x,y)+g(x,y)$ over $x,y \in (0,1)$. I know that for each fixed $x$, $f(x,\cdot)$ is a decreasing function of $y$ while $g(x,\cdot)$ is an increasing function of $y$. Hence I think there must be an optimal value of $y$ for each fixed value of $x$. To optimize the sum with respect to $x,y$, I follow these steps.
1- Fix a value $x_i \in (0,1)$.
2- Minimize the sum with respect to $y$ for this fixed $x_i$; store the minimum value as $opt(x_i)$ and the minimizing $y$ as $y(x_i)$.
3- Repeat steps 1 and 2 until all values in $(0,1)$ (in practice, a grid of values) have been used.
4- At the end, find the minimum among all the $opt(x_i)$'s and take the corresponding $x_i, y(x_i)$ as the optimal values of $x,y$.
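The steps above can be sketched in code. The functions `f` and `g` below are hypothetical stand-ins (not from the question), chosen only so that for each fixed $x$, `f` is decreasing in $y$ and `g` is increasing in $y$ on $(0,1)$, with a known interior minimum at $(\tfrac12, \tfrac12)$:

```python
import math

# Hypothetical example functions (not from the question): for each fixed x,
# f(x, .) is decreasing in y on (0, 1) and g(x, .) is increasing in y.
def f(x, y):
    return (x - 0.5) ** 2 + (1.0 - y) ** 2

def g(x, y):
    return y ** 2

def grid_minimize(f, g, n=199):
    """Steps 1-3 from the question, on an interior grid of (0, 1):
    for each x_i, minimize f + g over y and record opt(x_i), y(x_i);
    then (step 4) return the best pair overall."""
    grid = [i / (n + 1) for i in range(1, n + 1)]  # interior points of (0, 1)
    best_val, best_x, best_y = math.inf, None, None
    for x in grid:
        # inner one-dimensional minimization over y for this fixed x
        y_opt = min(grid, key=lambda y: f(x, y) + g(x, y))
        opt = f(x, y_opt) + g(x, y_opt)
        if opt < best_val:
            best_val, best_x, best_y = opt, x, y_opt
    return best_val, best_x, best_y

val, x_star, y_star = grid_minimize(f, g)
# For these example functions the true minimum is at (1/2, 1/2) with value 1/2.
```

Note that a finite grid only locates the optimum up to the grid resolution; since $f+g$ is unimodal in $y$ here, the inner loop could instead use a proper one-dimensional solver for each fixed $x$.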
Is this strategy correct?
If you are just looking for the extreme values of the function, then, just as in the one-dimensional case, you can differentiate to find the critical points. Two-dimensional calculus is only a little trickier than the one-dimensional version.
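As a concrete (hypothetical) illustration of the critical-point approach, take $F(x,y) = (x - \tfrac12)^2 + (1-y)^2 + y^2$, where the last two terms are decreasing and increasing in $y$ respectively, as in the question. Setting both partial derivatives to zero gives
$$\frac{\partial F}{\partial x} = 2\left(x - \tfrac12\right) = 0 \;\Rightarrow\; x = \tfrac12, \qquad \frac{\partial F}{\partial y} = -2(1-y) + 2y = 0 \;\Rightarrow\; y = \tfrac12,$$
so the interior minimum is $F\!\left(\tfrac12, \tfrac12\right) = \tfrac12$ (the Hessian is $\operatorname{diag}(2,4)$, positive definite).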
That said, if for a given $x$, $g_x(y) = f(x, y)$ is decreasing, while for a given $y$, $h_y(x) = f(x, y)$ is increasing, then you can always get a smaller value of the function by sending $x$ towards $0$ and $y$ towards $1$. It's only if there are places where the function flattens out that you have a chance of attaining a maximum or minimum value within the region.
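To illustrate with a hypothetical example: take $F(x,y) = x + (1-y)$ on $(0,1)^2$, which is strictly increasing in $x$ and strictly decreasing in $y$ everywhere. Then
$$\inf_{x,y \in (0,1)} F(x,y) = \lim_{x \to 0^+,\; y \to 1^-} \bigl(x + (1-y)\bigr) = 0,$$
but the infimum is not attained at any interior point: since $\partial F/\partial x = 1 \neq 0$ everywhere, $F$ has no critical point, and the function never "flattens out" inside the region.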