I want to model a problem using game theory. The players are the network's agents $P = \{1,\cdots,N\}$, and the strategies are $S = \{\text{Red},\text{Green}\}$.
The rules are:
- at the beginning of the game, each player selects a strategy.
- the agents who selected Red cannot take any further action in the game.
- the agents who selected Green remain Green unless they become Unstable, i.e., they have more Red neighbors than Green ones. In that case they switch their strategy to Red, and the game continues until no further action is possible (i.e., every Green node has at least as many Green neighbors as Red ones).
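The update rule above can be sketched as a simple fixed-point iteration. This is an illustrative simulation under my own assumptions: the network is an undirected graph given as an adjacency list, and `run_cascade` is a name I made up, not from any library.

```python
def run_cascade(neighbors, strategy):
    """Repeatedly flip Unstable Green nodes to Red until nothing changes.

    neighbors: dict mapping node -> list of adjacent nodes.
    strategy:  dict mapping node -> 'Red' or 'Green' (the initial choices).
    Returns the final strategy profile.
    """
    changed = True
    while changed:
        changed = False
        for v, s in list(strategy.items()):
            if s != 'Green':
                continue  # Red players take no further action
            red = sum(1 for u in neighbors[v] if strategy[u] == 'Red')
            green = len(neighbors[v]) - red
            if red > green:  # v is Unstable: more Red neighbors than Green
                strategy[v] = 'Red'
                changed = True
    return strategy

# Example: a star with center 0 and leaves 1, 2, 3; two leaves pick Red.
nbrs = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
final = run_cascade(nbrs, {0: 'Green', 1: 'Red', 2: 'Red', 3: 'Green'})
```

In this example the center becomes Unstable (2 Red vs. 1 Green neighbor) and flips, which in turn makes the remaining Green leaf Unstable, so the cascade ends with every node Red. Because flips are monotone (Green can only become Red), the iteration always terminates.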
Is this model valid from a game-theoretic perspective? Are there any restrictions on the rules of a game that must be followed?
Your model makes sense as a game, except that it is missing one thing: utility functions. For game-theoretic reasoning to apply, each player needs some function describing the utility they incur, for example after each step of the game or at the end of the game.
In either case, once you add a utility function, your game could easily be modeled as a stochastic game (you don't actually even need the stochastic part; your transitions are deterministic). You could also model it as an extensive-form game, but I think that would be less useful, unless there are components of the problem I am unaware of.
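To make the "add a utility function" point concrete, here is one hedged possibility: a terminal utility that pays, say, 1 to players who finish Green and 0 to players who finish Red. This particular payoff is purely an illustrative assumption on my part, not something from your problem statement.

```python
def terminal_utility(final_strategy):
    """Map a final strategy profile (node -> 'Red'/'Green') to payoffs.

    Assumed payoff: 1 for ending Green, 0 for ending Red. Any other
    mapping from outcomes to real numbers would work equally well.
    """
    return {v: 1 if s == 'Green' else 0 for v, s in final_strategy.items()}

# Applied to some final profile of the dynamics:
payoffs = terminal_utility({1: 'Red', 2: 'Green', 3: 'Green'})
```

With such a utility in place, the game is fully specified and standard solution concepts (e.g., Nash equilibrium of the induced one-shot strategy-selection game) become meaningful.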