Suppose I have some function $F(x,y) = (x-x_0)^2 + (y-y_0)^2$
The variables $x_0 ,y_0$ are my 'targets' for $x,y$, i.e. I want to determine $x,y$ such that $F(x,y) = 0$
Now, $x$ and $y$ are obtained from numerically solving a system of ordinary differential equations. For different initial conditions $\alpha, \beta$, we get different values for $x,y$.
The aim is then to find $\alpha, \beta$ such that $F = 0$. I had thought some sort of gradient descent algorithm would work, but I can't seem to frame this problem as a gradient descent one.
How would I go about determining $\alpha, \beta$?
----- Clarifications in response to comments -----
$(\alpha, \beta)$ are initial conditions which are related, through a transformation, to initial conditions on the ODEs $\frac{dx}{d\lambda}, \frac{dy}{d\lambda}$ for some parameter $\lambda$. These ODEs can be solved numerically and then evaluated at a particular value of $\lambda$ to produce $x, y$. Ultimately I want to determine $\alpha, \beta$ such that $x = x_0$ and $y = y_0$.
I hope this is clearer.
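For concreteness, here is a minimal sketch of how $F(\alpha, \beta)$ gets evaluated numerically. Everything specific in it is a placeholder: the right-hand side `rhs`, the transformation `initial_state`, the target values, and the evaluation point $\lambda$ stand in for the actual system, and `scipy.integrate.solve_ivp` stands in for whatever solver is used.

```python
from scipy.integrate import solve_ivp

x_target, y_target = 1.0, 2.0   # placeholder targets x_0, y_0
lam_end = 5.0                   # placeholder value of lambda at which x, y are read off

def rhs(lam, state, alpha, beta):
    # placeholder right-hand side for dx/dlambda, dy/dlambda
    x, y = state
    return [-alpha * x, -beta * y]

def initial_state(alpha, beta):
    # placeholder transformation from (alpha, beta) to initial conditions
    return [alpha + beta, alpha - beta]

def F(params):
    # integrate the ODEs from the transformed initial conditions,
    # read off (x, y) at lambda = lam_end, and return the squared distance to the target
    alpha, beta = params
    sol = solve_ivp(rhs, (0.0, lam_end), initial_state(alpha, beta),
                    args=(alpha, beta), t_eval=[lam_end])
    x, y = sol.y[:, -1]
    return (x - x_target) ** 2 + (y - y_target) ** 2
```

The question is then how to choose $(\alpha, \beta)$ so that this quantity is driven to zero.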
This is only to get you started... not actually an answer, since it's not clear what you're asking:
If you want to apply the gradient method, you need to take the partial derivatives.
$$\frac{\partial F}{\partial x} = 2(x-x_0)\partial_x(x-x_0)=2(x-x_0)$$ $$\frac{\partial F}{\partial y} = 2(y-y_0)\partial_y(y-y_0)=2(y-y_0)$$
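But the free parameters here are $(\alpha, \beta)$, not $(x, y)$: $x$ and $y$ depend on $(\alpha, \beta)$ through the ODE solution, so (assuming that solution depends smoothly on the initial conditions) the gradient you would actually descend along comes from the chain rule:

$$\frac{\partial F}{\partial \alpha} = 2(x-x_0)\frac{\partial x}{\partial \alpha} + 2(y-y_0)\frac{\partial y}{\partial \alpha}, \qquad \frac{\partial F}{\partial \beta} = 2(x-x_0)\frac{\partial x}{\partial \beta} + 2(y-y_0)\frac{\partial y}{\partial \beta}.$$

Since sensitivities like $\partial x/\partial \alpha$ come out of a numerical solver, in practice you would approximate the whole gradient by finite differences (or solve the variational/sensitivity equations alongside the ODEs). A minimal finite-difference gradient descent sketch, assuming you already have a function `F(params)` that runs the solver and returns $F$ (like the sketch in the question); the step size `h`, learning rate `lr`, and tolerance are illustrative, not tuned:

```python
import numpy as np

def grad_fd(F, params, h=1e-6):
    # central finite-difference approximation of the gradient of F at params
    params = np.asarray(params, dtype=float)
    g = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params)
        e[i] = h
        g[i] = (F(params + e) - F(params - e)) / (2.0 * h)
    return g

def gradient_descent(F, params, lr=0.1, tol=1e-12, max_iter=5000):
    # plain gradient descent on F(alpha, beta); stops once F is (numerically) zero
    params = np.asarray(params, dtype=float)
    for _ in range(max_iter):
        if F(params) < tol:
            break
        params = params - lr * grad_fd(F, params)
    return params

# usage with the F from the question's sketch:
# alpha, beta = gradient_descent(F, [1.0, 1.0])
```

Since $F=0$ exactly when $x = x_0$ and $y = y_0$, this is really a two-equation root-finding problem in $(\alpha, \beta)$ (a shooting-method setup), so a Newton-type root finder applied to $(x - x_0, y - y_0)$ would also be a natural choice instead of gradient descent.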