I have $\vec{C} = G(\vec{\beta})$ by solving a system of ODEs numerically. Thanks to Robert's help, the ODE can be found at this link: Solving a system of ODE
In addition, $\vec{\beta}$ should satisfy $$A\vec{\beta}\le f(\vec{\beta}, \vec{C}),$$ and I want to $$\max\ 19\beta_1+0.5\beta_2+16\beta_3,$$ where $A$ is a given matrix and $f$ is a given function.
I am thinking of solving this problem by iteration. Starting from an initial approximation $\vec{\beta}^0$, for $k=1,2,3,\dots$ I first solve part $1$ by computing $\vec{C}^k = G(\vec{\beta}^{k-1})$, and then solve the part $2$ optimization $$\max\ 19\beta_1^{k}+0.5\beta_2^{k}+16\beta_3^{k}$$ subject to $$A\vec{\beta}^{k}\le f(\vec{\beta}^{k-1}, \vec{C}^k).$$
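The alternating iteration above can be sketched as follows. This is only a minimal illustration: `G` and `f` below are hypothetical stand-ins (the real `G` would come from the numerical ODE solve), and `A` is taken to be the identity so the linear program is easy to inspect.

```python
import numpy as np
from scipy.optimize import linprog

def G(beta):
    # Placeholder for the ODE solution map C = G(beta); a smooth,
    # slowly varying map so the fixed-point iteration has a chance
    # to contract. Replace with the actual numerical ODE solve.
    return 0.1 * np.tanh(beta)

def f(beta, C):
    # Placeholder right-hand side of the constraint A beta <= f(beta, C).
    return 1.0 + 0.05 * C

A = np.eye(3)                      # illustrative constraint matrix
w = np.array([19.0, 0.5, 16.0])    # objective coefficients

beta = np.zeros(3)                 # initial approximation beta^0
for k in range(100):
    C = G(beta)                                 # part 1: C^k = G(beta^{k-1})
    # part 2: maximize w.beta (linprog minimizes, hence -w)
    res = linprog(-w, A_ub=A, b_ub=f(beta, C), bounds=[(0, None)] * 3)
    step = np.linalg.norm(res.x - beta)
    beta = res.x
    if step < 1e-10:                            # stop when iterates settle
        break
print(k, beta)
```

With these placeholders the iterates settle quickly, but that says nothing about the real problem: whether the loop converges depends on how strongly $G$ and $f$ react to changes in $\vec\beta$.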
But I am worried that this iteration will not converge as $k\to\infty$. My question is: will this method converge? If not, how should the optimization/ODE system be solved so that it converges to the true solution?
Any help is appreciated! Many thanks!
The problem as stated has unknown essential parameters. The number and location of the maxima are unknown as well, and the optimization method is not specified. Under these conditions, the convergence of the iterations cannot be guaranteed.
The situation can be improved by making the optimization as accurate as possible.
Let us consider possible ways to do that.
$\color{brown}{\textbf{The choice of initial point.}}$
Since the objective is linear and increasing in each $\beta_i$, its maximum is attained on the boundary of the feasible set. This means that the constraints $A\vec\beta\le f(\vec\beta, \vec C)$ can be used in the rigorous variant $$A\vec\beta = f\left(\vec\beta, \vec C\right).\tag1$$ Thus, the task is to maximize the scalar product $\vec w \cdot \vec \beta,$ where $$\vec w=\begin{pmatrix}19\\0.5\\16\end{pmatrix},\tag2$$ under the constraint $(1).$
This approach makes it possible to localize the initial points near the possible maxima.
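With $\vec C$ (and hence the right-hand side of $(1)$) frozen at the current iterate, the subproblem of maximizing $\vec w\cdot\vec\beta$ under the equality constraint $(1)$ is a standard linear program. A minimal sketch, with a hypothetical $2\times 3$ matrix $A$ and a stand-in vector $b$ playing the role of $f(\vec\beta, \vec C)$:

```python
import numpy as np
from scipy.optimize import linprog

w = np.array([19.0, 0.5, 16.0])    # objective from (2)
A = np.array([[1.0, 1.0, 1.0],     # hypothetical 2x3 constraint matrix
              [1.0, 2.0, 0.0]])
b = np.array([3.0, 2.0])           # stand-in for f(beta_prev, C)

# linprog minimizes, so negate w; A_eq enforces the rigorous variant (1).
res = linprog(-w, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
print(res.x, -res.fun)             # optimum at a vertex of {A beta = b, beta >= 0}
```

Because the constraint set is a line segment here, the maximizer sits at one of its endpoints; such vertices of the rigorous problem are natural candidates for the initial points.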
$\color{brown}{\textbf{Iterations.}}$