I am familiar with the usual optimal control problem of the form: $$ \min_{u(t)}\int_{t_0}^{t_1}{f(t,x(t),u(t))}dt\\ \text{s.t.}~\dot{x}(t)=g(t,x(t),u(t))\\ \text{given } x(t_0), t_0, t_1. $$ I am currently modeling a problem where my control is a function of my state variable, and the state, in turn, depends on the control and its derivative:
$$ \min_{u(t)}\int_{t_0}^{t_1}{f(t,x(t, u(x), u'(x)),u(x))}dt\\ \text{s.t.}~u'(x)\geq 0\\ \text{given } x(t_0), t_0, t_1. $$
I do have an equation for $x(t,u(x))$, but $u(x)$ is free and yet to be determined, so this is not in closed form. I wonder if there is a way to reduce this problem (possibly by redefining the state variable) to make it tractable with the usual Hamiltonian approach, or if I need to resort to the theory of infinite-dimensional control (because $u$ is a function of $x$ and only indirectly of $t$).
If the control is subject to dynamics, then it can be treated as a state. Simply introduce a new control, say $v(t) = \dot{u}(t)$, and write the augmented dynamics as:
$$ \dot{x}_a = \begin{bmatrix} \dot{x} \\ \dot{u} \end{bmatrix} = \begin{bmatrix} g(t, x, u) \\ v \end{bmatrix} = g_a(t, x_a, v) $$ with $v(t) \geq 0$. (Here $g$ is the dynamics function of your first problem statement, so the notation matches the question.)
Making $u$ a function of $x$ is then a matter of adding a path constraint of the form: $$ c_a(x_a,t) = u - \phi(x,t) = 0, $$ where $\phi$ is the known relation. Similarly, if $x$ is a known function $\psi$ of $u$, the path constraint is: $$ c_a(x_a,t) = x - \psi(u,t) = 0. $$
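The augmentation can be tried numerically. Below is a minimal sketch via direct transcription (forward Euler plus `scipy.optimize.minimize`); the cost, dynamics, horizon, and grid size are toy assumptions of mine, not the asker's model. The point is only that $u$ becomes a state and $v \geq 0$ becomes simple bounds on the new control:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (all choices below are illustrative assumptions):
#   min  \int_0^T (x - 1)^2 + u^2 dt
#   s.t. xdot = -x + u   (original state dynamics)
#        udot = v        (control promoted to a state)
#        v >= 0          (enforced as bounds on the new control)
N, T = 50, 2.0
dt = T / N
x0, u0 = 0.0, 0.0

def cost(v):
    x, u = x0, u0
    J = 0.0
    for vk in v:
        J += ((x - 1.0) ** 2 + u ** 2) * dt  # running cost, forward Euler
        x, u = x + (-x + u) * dt, u + vk * dt  # augmented dynamics step
    return J

res = minimize(cost, np.zeros(N), bounds=[(0.0, None)] * N,
               method="L-BFGS-B")
print(res.success, round(res.fun, 4))
```

Note that the inequality on $\dot{u}$ never enters the solver as anything exotic; it is just a box constraint on the decision variables $v_k$.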
Note, however, that if $u$ is a function of $x$, then $x$ cannot also be a function of $u$! If both path constraints are imposed and they are independent, the problem is infeasible, since no trajectory can satisfy the two constraints simultaneously.
Finally, if $x$ is not subject to dynamics, then $x$ is not a state! In this case simply switch the names of the variables in your second problem statement.
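For completeness: with the augmented state $x_a = (x, u)$ and new control $v$, the problem does fit the usual Hamiltonian machinery. A sketch in the question's notation ($f$ the running cost, $g$ the dynamics), where the adjoints $\lambda_x, \lambda_u$, the path-constraint multiplier $\mu$, and the symbol $\phi$ for the known relation $u = \phi(x,t)$ are my labels, not from the original:

```latex
H_a(t, x_a, v, \lambda, \mu)
  = f(t, x, u)
  + \lambda_x \, g(t, x, u)            % adjoint of the original dynamics
  + \lambda_u \, v                     % adjoint of \dot{u} = v
  + \mu \left( u - \phi(x, t) \right)  % path constraint adjoined
```

Since $H_a$ is linear in $v$, the minimum condition over $v \geq 0$ yields a bang-bang/singular structure in $v$ rather than a stationarity equation.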