Are there any theorems/sufficient conditions about when the optimal value function of a parametrized optimization problem is monotonic in the parameter? Specifically, are there simple conditions guaranteeing that $V(t) \equiv \max_{x\in X} f(x,t)$ subject to $G(x,t)\geq 0$ is monotonic in $t$, where $t\in T$ is a parameter? We can even simplify by letting $f$ and $G$ be linear in $x$.
I can only find results about convexity/concavity of $V(\cdot)$ :(
You might like to check out the envelope theorem. A summary is here: https://en.wikipedia.org/wiki/Envelope_theorem
Say $f(x,t)$ has a positive partial derivative with respect to $t$. If, in addition, the feasible set defined by $G(x,t)\geq 0$ does not shrink as $t$ gets larger, i.e. $G_t(x,t)\geq 0$, and the mild conditions described in the second theorem at the above link are satisfied, then your optimal value function will be increasing.
However, if $G_t(x,t)< 0$, then higher values of $t$ might severely constrain the available choices; in that case you would need to analyze the structure of the problem in more detail to reach a conclusion.
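To make the first case concrete, here is a small numerical sketch (the problem instance is my own choice, not from the question): take $f(x,t) = tx$ and $G(x,t) = 1 + t - x$ with $x \geq 0$, so $f_t = x \geq 0$ and $G_t = 1 \geq 0$, and the value function $V(t) = t(1+t)$ should be increasing for $t \geq 0$.

```python
import numpy as np

# Illustrative example (assumed, not from the original post):
#   f(x,t) = t*x,  feasible set: 0 <= x <= 1 + t  (i.e. G(x,t) = 1 + t - x >= 0).
# Here f_t = x >= 0 and G_t = 1 >= 0, so the conditions above predict that
# V(t) is nondecreasing; in closed form V(t) = t*(1 + t).

def V(t, n=10001):
    """Approximate V(t) = max_{0 <= x <= 1+t} t*x by grid search."""
    x = np.linspace(0.0, 1.0 + t, n)  # feasible points include the endpoint 1+t
    return np.max(t * x)

ts = np.linspace(0.0, 2.0, 21)
vals = [V(t) for t in ts]
# Check monotonicity numerically along the grid of parameter values.
assert all(a <= b + 1e-9 for a, b in zip(vals, vals[1:]))
```

The grid search is crude but keeps the example self-contained; any LP solver would give the same values.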
Although you are interested in the optimal value function, another tool that might be useful for your work is supermodularity, which provides insight into the monotonicity of the optimal choice correspondence. When one of the variables is a parameter, this concept is called increasing differences. In a nutshell, a smooth function has increasing differences if $$ \frac{\partial^2 f(x,t)}{\partial x\,\partial t}\geq 0. $$ When $f$ has increasing differences, Topkis's theorem asserts that the maximizers have a monotonic structure. You would need to look at the details of this remark, as there are some subtleties that must be addressed when maximizers are not unique.
This theorem might come in handy if the questions you are dealing with allow you to say something about the value function once monotonicity of the optimizers is established.
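A minimal numerical sketch of Topkis's theorem, with an example I am supplying for illustration (it is not in the post): $f(x,t) = -(x-t)^2$ has cross-partial $\partial^2 f/\partial x\,\partial t = 2 \geq 0$, so the maximizer over $x \in [0,1]$, which is $x^*(t) = \min(\max(t,0),1)$ in closed form, should be nondecreasing in $t$.

```python
import numpy as np

# Illustrative example (assumed): f(x,t) = -(x - t)^2 has increasing
# differences (cross-partial equals 2 >= 0), so by Topkis's theorem the
# maximizer x*(t) over x in [0, 1] is nondecreasing in t.

def argmax_x(t, n=10001):
    """Grid-search maximizer of f(x,t) = -(x-t)^2 over x in [0, 1]."""
    x = np.linspace(0.0, 1.0, n)
    return x[np.argmax(-(x - t) ** 2)]

ts = np.linspace(-0.5, 1.5, 21)
xs = [argmax_x(t) for t in ts]
# The sequence of maximizers should be nondecreasing in the parameter.
assert all(a <= b + 1e-9 for a, b in zip(xs, xs[1:]))
```

Note that here the maximizer is unique for each $t$; with multiple maximizers, Topkis's theorem instead gives monotonicity of the argmax set in the strong set order.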