I am confused.
A game is generally defined by $\mathcal{G}=(\mathcal{P}, \mathcal{A}, \mathcal{U})$, where $\mathcal{P}$ is the set of players, $\mathcal{A}$ is the set of actions, and $\mathcal{U}$ is the set of payoffs.
Suppose the set of actions is a product of $m$ sets; e.g., for $m=2$, $\mathcal{A}=\mathcal{F}\times\mathcal{O}$, where $\mathcal{F}$ is the finite set $\{f_1, f_2, \dotsc, f_N\}$ and $\mathcal{O}$ is the interval $(0, O_{\mathrm{max}})$. Each player $\in \mathcal{P}$ must choose a strategy $(f, o)\in\mathcal{F} \times\mathcal{O}$.
What do we call this kind of game? Joint game theory? Are these games well studied in the literature? If they have a specific name and do exist, how does one analyze them and find their Nash equilibria? Does anyone know some references?
It's still just "game theory," but your strategies simply have a continuous component.
Sometimes, if the action space is infinite, we call these "continuous games". In your situation, your action space is mixed, in that each action has a discrete component $f \in \mathcal{F}$ and a continuous component $o \in \mathcal{O}$.
Finding a Nash equilibrium in such a game would require a hybrid optimization approach: essentially identifying minima along "slices" of the action space. Imagine several parallel curves; we compute the minimum along each of these, and then select the smallest of the resulting set. It is like constrained optimization of a function $(x,y) \to z$, but where we restrict the search to discrete values of $x$.
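The slicing idea above can be sketched in code. This is a minimal illustration, not a full equilibrium solver: it computes a single player's best response by fixing each discrete choice $f$, minimizing the (hypothetical, made-up) cost over the continuous component $o \in (0, O_{\mathrm{max}})$ via ternary search (assuming the cost is unimodal in $o$ on each slice), and then picking the best slice. Iterating best responses between players is one heuristic route toward a Nash equilibrium, though convergence is not guaranteed in general.

```python
def ternary_search_min(g, lo, hi, iters=100):
    """Minimize a unimodal function g on [lo, hi] (assumption: unimodality)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def best_response(cost, F, O_max):
    """Minimize cost(f, o) over the hybrid space F x (0, O_max):
    optimize o on each discrete slice f, then take the best slice."""
    best = None
    for f in F:
        o = ternary_search_min(lambda o: cost(f, o), 0.0, O_max)
        c = cost(f, o)
        if best is None or c < best[0]:
            best = (c, f, o)
    return best[1], best[2]

# Hypothetical cost, convex in o on each slice, purely for illustration.
F = [1, 2, 3]
f_star, o_star = best_response(lambda f, o: (o - f) ** 2 + 0.5 * f, F, O_max=2.5)
# Slice f=1 attains cost 0.5 at o=1, which beats the other slices.
```

In a real problem the cost on each slice may not be unimodal, in which case the per-slice minimization would need a global method (e.g., grid search or multistart).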