I have a parameterized optimization problem
\begin{eqnarray} \max_{x\in D(\theta)} f(x,\theta) \end{eqnarray} Here the parameter space is $\Theta=(0,1)$; the constraint correspondence $D:\Theta\rightrightarrows (0,+\infty)$ is non-empty-valued, compact-valued, and continuous; and the objective function $f(x,\theta)$ is continuous on $(0,+\infty)\times (0,1)$.
Then by the standard Berge maximum theorem, the solution correspondence \begin{eqnarray} x^*(\theta)=\operatorname{argmax}_{x\in D(\theta)} f(x,\theta) \end{eqnarray} is upper hemicontinuous (u.h.c.) and the value function \begin{eqnarray} f^*(\theta)=\max_{x\in D(\theta)} f(x,\theta) \end{eqnarray} is continuous in $\theta$.
My question is: the continuity of $f^*(\theta)$ on $(0,1)$ does not impose enough regularity near the boundary points $0$ and $1$. For example, near $0$ the value function could behave erratically, like $\sin(1/\theta)$ as $\theta\to 0$. In the scenario I consider, however, such a value function is very unlikely to arise. Unfortunately, because of the mathematical formulation, I cannot continuously extend the optimization problem to include the boundary points.
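To make the concern concrete, here is a minimal illustration of my own (not my actual problem): take \begin{eqnarray} D(\theta)=\{1\},\qquad f(x,\theta)=\sin\!\left(\frac{1}{\theta}\right). \end{eqnarray} Then $D$ is trivially non-empty-valued, compact-valued, and continuous, and $f$ is continuous on $(0,+\infty)\times(0,1)$, so all the hypotheses of the maximum theorem hold; yet the value function $f^*(\theta)=\sin(1/\theta)$ has no limit as $\theta\to 0^+$.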
So, is there a generalization of the maximum theorem that imposes more structure on the value function? I want the value function to have a limit as $\theta$ approaches the boundary points. Unfortunately, requiring the objective function to satisfy (quasi-)concavity or supermodularity conditions is also impossible in my scenario.
I have done a literature search. It appears that there are several generalizations in the direction of weakening the hypotheses, but so far I have not seen many in the direction of strengthening the conclusion.