I'm trying to solve a dynamic programming problem with $n$ (continuous) state variables and only $1$ (continuous) decision variable, which affects all the states. For $n=1$ this is a standard problem and I can compute the FOC easily. For larger $n$, and even for $n=2$, the same process fails and I cannot seem to get a formula for the FOC. Is there an inherent difficulty with this kind of problem, or am I doing something wrong? Has anyone encountered a similar example?
To be more precise, here is my model:
$s\in \mathbf{R}^{n}$ is a state vector, $f:\mathbf{R}\times\mathbf{R}^n\to\mathbf{R}$ is the one-stage payoff function and $\beta\in (0,1)$ is the discount factor. The decision maker chooses $x\in [0,1]$ and the next state is determined by some deterministic transition function $g:\mathbf{R}\times\mathbf{R}^n\to\mathbf{R}^n$. Formally, the optimization problem is
$$V(s)=\max_{x\in[0,1]} \left\{ f(x,s)+\beta V(g(x,s)) \right\}$$
I wish to find the first-order condition linking $x_t$, $x_{t+1}$, and $s_t$ in terms of $f$, $g$, and their derivatives.
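In case it helps to see the problem concretely, here is a minimal value function iteration sketch for the $n=2$ case. The payoff $f$, transition $g$, and all grid sizes below are hypothetical stand-ins (not from my model); the point is just the structure: a 2-dimensional state, a single control $x\in[0,1]$, and the Bellman operator applied on a grid with nearest-neighbor lookup for the next state.

```python
import numpy as np

# --- hypothetical example, not the actual model ---
# f(x, s) = -(x - s1)^2 - 0.5 * s2^2      one-stage payoff
# g(x, s) = (0.5*s1 + 0.5*x, 0.9*s2)      deterministic transition
beta = 0.95

N1, N2, NX = 21, 21, 41
s1_grid = np.linspace(0.0, 1.0, N1)
s2_grid = np.linspace(0.0, 1.0, N2)
x_grid = np.linspace(0.0, 1.0, NX)

def payoff(x, s1, s2):
    return -(x - s1) ** 2 - 0.5 * s2 ** 2

def transition(x, s1, s2):
    return 0.5 * s1 + 0.5 * x, 0.9 * s2

def nearest(grid, v):
    # index of the nearest point on a uniform grid (crude stand-in
    # for interpolation, good enough for a sketch)
    idx = np.rint((v - grid[0]) / (grid[1] - grid[0])).astype(int)
    return np.clip(idx, 0, len(grid) - 1)

V = np.zeros((N1, N2))
policy = np.zeros((N1, N2))
for _ in range(1000):
    V_new = np.empty_like(V)
    for i, s1 in enumerate(s1_grid):
        for j, s2 in enumerate(s2_grid):
            ns1, ns2 = transition(x_grid, s1, s2)  # vectorized over x
            vals = payoff(x_grid, s1, s2) + beta * V[nearest(s1_grid, ns1),
                                                     nearest(s2_grid, ns2)]
            k = np.argmax(vals)
            V_new[i, j] = vals[k]
            policy[i, j] = x_grid[k]
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

For this particular (hypothetical) $f$ and $g$, the continuation value turns out not to depend on $s_1$, so the numerical policy is $x^*(s)\approx s_1$; with a general transition that is no longer true, and it is exactly the coupling of all $n$ states through the single $x$ that makes the analytical FOC harder to disentangle.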
Thanks!