The maximization problem
I want to solve the problem \begin{equation} \max_{x(\cdot), y(\cdot), u(\cdot)} \int_{t_0}^{T}f(x(t),t)g(t)\, dt \end{equation} subject to \begin{align} &x(t)t - x(t)^2/2 = y(t) \mbox{ (SC)}\\ &\dot x (t) = u(t) \\ &\dot y(t) = x(t) \\ &u(t)\geq 0 \\ & x(t_0)\geq 0, \quad x(T) \mbox{ free}\\ & y(t_0)\geq 0, \quad y(T) \mbox{ free} \\ & t_0,T >0 \mbox{ fixed} \end{align} where $g(t)>0$ is a probability density function with CDF $G$, $y(t)$ is absolutely continuous, and $x(t)$ is piecewise absolutely continuous (with finite jumps). For simplicity, assume $f(x,t)= x$ throughout my question.
At first glance, the problem looks simple because it is linear in the control variable $u$. However, two things complicate it: (1) the state variable $x$ may have jumps, and (2) the state equality constraint $x(t)t - x(t)^2/2 = y(t)$ (SC), which implies $\dot x(t) [x(t)-t] = 0$ after differentiating with respect to $t$ (indeed, differentiating (SC) gives $\dot x\, t + x - x \dot x = \dot y = x$).
I am looking for sufficient conditions under which the solution takes the following form: \begin{equation} x^*(t) = \begin{cases} 0, &\text{ if } t\in[t_0,t_1]\\ 2t_1, &\text{ if } t\in[t_1,t_2]\\ t, &\text{ if } t\in[t_2,T] \end{cases} \end{equation} which has a jump at $t_1$. Any of the three regions could be empty, and $x^*(t)$ is continuous at $t_2$ (i.e., $2t_1=t_2$) if $t_2< T$.
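As a quick sanity check on this candidate (a sketch in sympy, taking $t_2 = 2t_1$ as above), note that (SC) pins down $y = xt - x^2/2$ on each region; one can then verify symbolically that $y$ stays continuous across the jump in $x$ at $t_1$ and that $\dot y = x$ holds region by region:

```python
import sympy as sp

t, t1 = sp.symbols('t t_1', positive=True)

# Candidate x*(t) on the three regions (taking t_2 = 2*t_1)
x_pieces = {'left': sp.Integer(0), 'middle': 2*t1, 'right': t}

# (SC) pins down y on each region: y = x*t - x**2/2
y = {k: sp.expand(x*t - x**2/2) for k, x in x_pieces.items()}

# y is continuous at the jump t_1: both one-sided values equal 0
assert y['left'].subs(t, t1) == 0
assert sp.simplify(y['middle'].subs(t, t1)) == 0

# y is also continuous at t_2 = 2*t_1 (where x itself is continuous)
assert sp.simplify(y['middle'].subs(t, 2*t1) - y['right'].subs(t, 2*t1)) == 0

# The dynamics dot y = x hold on every region
for k, x in x_pieces.items():
    assert sp.simplify(sp.diff(y[k], t) - x) == 0
```

So the jump in $x$ at $t_1$ is exactly the one compatible with continuity of $y$: the quadratic $xt - x^2/2$ takes the value $0$ at both $x = 0$ and $x = 2t_1$ when $t = t_1$.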
My attempt
Let \begin{equation} H(x,y,u,t) = x g(t) + \lambda[xt - x^2/2 - y] + \mu_y x + \mu_x u \end{equation} By the Pontryagin Maximum Principle, \begin{align} -&\frac{\partial H}{\partial x} = - ( g + \lambda (t-x)+ \mu_y) = \dot \mu_x\\ -&\frac{\partial H}{\partial y} = \lambda = \dot \mu_y\\ &\frac{\partial H}{\partial u } = \mu_x \leq 0, \quad \mu_x u = 0\\ & \mu_y(t_0) \leq 0, \quad \mu_y(t_0)y(t_0) =0 \\ & \mu_x(t_0)\leq 0, \quad \mu_x(t_0) x(t_0)=0 \\ & \mu_y(T) = 0, \quad \mu_x(T)=0. \end{align} The second-order (sufficiency) condition requires that $\max_{u} H(x,y,u,t)$ be concave in $(x,y)$, i.e., $\lambda \geq 0$.
Issues with the state equality constraint
It is intuitive that if $g(t)$ is decreasing, the solution should look like \begin{equation} x^*(t) = \begin{cases} 2t_0, &\text{ if } t\in[t_0,t_2]\\ t, &\text{ if } t\in[t_2,T] \end{cases} \end{equation} with $t_2 = 2t_0$ so that $x^*$ is continuous. I propose the following multipliers: \begin{equation} \mu_y (t) = \begin{cases} -\frac{G(t_2)-G(t_0)}{{t_2}-{t_0}} \leq 0 , &\text{ if } t\in [t_0,t_2]\\ - g(t), &\text{ if } t\in (t_2,T) \\ 0, &\text{ if } t=T \end{cases} \end{equation} \begin{equation} \lambda (t) = \begin{cases} 0, &\text{ if } t\in [t_0,t_2]\\ - g'(t), &\text{ if } t\in (t_2,T] \end{cases} \end{equation} \begin{equation} \mu_x ( t) = \begin{cases} - \int_{t_0}^t \Big[ g(s) - \frac{G(t_2)-G(t_0)}{{t_2}-{t_0}} \Big] ds \leq 0 , &\text{ if } t\in[t_0, t_2) \\ 0, &\text{ if } t\in[t_2,T) \end{cases} \end{equation} Importantly, $g(t)$ decreasing implies $\lambda (t)=-g'(t)\geq 0$, so concavity is satisfied.
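One can check symbolically (a sketch in sympy, with $g$ left as a generic function and $G$ assumed to be its CDF, so $G' = g$) that these multipliers satisfy the two costate ODEs on each region:

```python
import sympy as sp

t, t0, t2, s = sp.symbols('t t_0 t_2 s', positive=True)
g = sp.Function('g')   # the density (monotonicity is not needed for the ODE check)
G = sp.Function('G')   # assumption: G is the CDF of g, so G' = g
A = (G(t2) - G(t0)) / (t2 - t0)   # average density on [t_0, t_2]

# Region [t_0, t_2]: x = 2*t_0, lambda = 0, mu_y = -A, mu_x = -int_{t_0}^t (g - A) ds
lam1, mu_y1 = sp.Integer(0), -A
mu_x1 = -sp.integrate(g(s) - A, (s, t0, t))

assert sp.diff(mu_y1, t) == lam1          # dot mu_y = lambda
assert sp.simplify(sp.diff(mu_x1, t)
                   + (g(t) + lam1*(t - 2*t0) + mu_y1)) == 0   # dot mu_x = -dH/dx

# Region (t_2, T): x = t, lambda = -g', mu_y = -g, mu_x = 0
lam2, mu_y2, mu_x2 = -sp.diff(g(t), t), -g(t), sp.Integer(0)

assert sp.simplify(sp.diff(mu_y2, t) - lam2) == 0
assert sp.simplify(sp.diff(mu_x2, t)
                   + (g(t) + lam2*(t - t) + mu_y2)) == 0
```

The ODEs and sign conditions hold; what the check does not resolve is precisely the discontinuity of $\mu_y$ at $t_2$ and $T$ asked about below.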
Question 1: why can $\mu_y$ be discontinuous at $t_2$ and $T$? Is it because of the state constraint?
I got more confused when I replaced (SC) with its differentiated form $\dot x(t) [x(t)-t] = 0$, following some textbooks. In that case, I no longer need $y$ as a state variable, and the Hamiltonian becomes \begin{equation} H(x,u,t) = x g(t) + \gamma(x -t)u + \mu_x u \end{equation} By the Pontryagin Maximum Principle, \begin{align} -&\frac{\partial H}{\partial x} = - ( g + \gamma u ) = \dot \mu_x\\ &\frac{\partial H}{\partial u } = \mu_x + \gamma(x(t)-t) \leq 0, \quad [ \mu_x + \gamma(x(t)-t)]u = 0\\ & \mu_x(t_0)\leq 0, \quad \mu_x(t_0) x(t_0)=0 \\ & \mu_x(T)=0. \end{align} The second-order condition is always satisfied because $\max_u H(x,u,t)$ is linear in $x$. [I don't think linearity causes the issue, because the problem persists with more general $f(x,t)$.]
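Spelling out one consequence of these conditions that I use below: on an arc where $x^*(t)=t$ and the control is interior ($u^*>0$),
\begin{align}
&u^* > 0 \ \Rightarrow\ \frac{\partial H}{\partial u} = 0 \ \Rightarrow\ \mu_x = 0 \quad (\text{the } \gamma\text{-term vanishes since } x = t),\\
&\mu_x \equiv 0 \ \Rightarrow\ \dot \mu_x = 0 = -(g + \gamma u^*) \ \Rightarrow\ \gamma(t) = -g(t) \quad (\text{using } u^* = 1).
\end{align}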
Question 2: Because $x^*(t)=t$ on $(t_2,T)$, we have $u^*=1$ and thus $\gamma (t) = -g(t)$. This is exactly the same as $\mu_y(t)$, but now concavity no longer requires $g(t)$ to be decreasing. Why is this inconsistent with the approach using (SC)?
I think it is because the state variable is usually required to be absolutely continuous, so the resulting optimality conditions do not rule out possible improvements that involve state jumps.
Issues with state jumps
If $g(t)$ is increasing, the solution should look like \begin{equation} x^*(t) = \begin{cases} 0, &\text{ if } t\in[t_0,t_1]\\ 2t_1, &\text{ if } t\in[t_1,T] \end{cases} \end{equation} with a jump at $t_1$. I propose the following multipliers: \begin{equation} \mu_y (t) = \begin{cases} -\frac{G(t_1)}{t_1-t_0}, &\text{ if } t\in [t_0 ,t_1)\\ - g(t_1), &\text{ if } t\in (t_1,T) \\ 0, &\text{ if } t=T \end{cases},\quad \lambda (t) = 0 \end{equation}
\begin{equation} \mu_x ( t) = \begin{cases} - \int_{t_0}^t \Big( g(s) - \frac{G(t_1)}{t_1-t_0} \Big) ds \leq 0, &\text{ if } t\in [t_0, t_1] \\ - \int_{t_1}^t \big( g(s) -g(t_1) \big) ds \leq 0 , &\text{ if } t\in [ t_1,T] \end{cases} \end{equation}
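Again these can be checked symbolically (a sketch in sympy, assuming $G$ is the CDF of $g$ with the convention $G(t_0)=0$): the costate ODE for $\mu_x$ holds on both regions, while the proposed $\mu_y$ is visibly discontinuous at $t_1$:

```python
import sympy as sp

t, t0, t1, s = sp.symbols('t t_0 t_1 s', positive=True)
g = sp.Function('g')        # generic density
G = sp.Function('G')        # assumption: G is the CDF of g, with G(t_0) = 0
B = G(t1) / (t1 - t0)       # average of g over [t_0, t_1]

# Region [t_0, t_1): x = 0, lambda = 0, mu_y = -B
mu_x1 = -sp.integrate(g(s) - B, (s, t0, t))
assert sp.simplify(sp.diff(mu_x1, t) + (g(t) - B)) == 0   # dot mu_x = -(g + mu_y)

# Region (t_1, T): x = 2*t_1, lambda = 0, mu_y = -g(t_1)
mu_x2 = -sp.integrate(g(s) - g(t1), (s, t1, t))
assert sp.simplify(sp.diff(mu_x2, t) + (g(t) - g(t1))) == 0

# The proposed mu_y is NOT continuous at t_1: the one-sided values
# -G(t_1)/(t_1 - t_0) and -g(t_1) differ for a generic increasing g
gap = sp.simplify(-B + g(t1))
assert gap != 0
```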
Question 3: At the jump at $t_1$, there should be jump conditions on $\mu_x$ and $\mu_y$. In particular, $\mu_y(t_1^-) = \mu_y(t_1^+)$ should hold because $y(t)$ is continuous, but it is not satisfied here. Am I missing something?
Question 4: I am aware of an alternative approach using an accumulative Lagrangian, which seems to escape these technical problems. The algebra is much more complicated, but the resulting Lagrangian looks very much like what I obtained above. Are the two methods equivalent? If so, why are state jumps and state equality constraints not an issue there?