Maximisation of a functional under integral constraint


I need your help with a maximisation problem that seems "easy" at first sight. I have an intuitive solution, but I am not able to translate my intuition into rigorous (first-order) conditions.

The problem is to maximise the functional

$$J\left[ y \right] = \int_a^b {u(t)y(t)dt}$$

subject to

$$\int_a^b {y(t)dt = 1} $$ and $$y:\left[ {a,b} \right] \to \left[ {0, + \infty } \right)$$

where $$u:\left[ {a,b} \right] \to \mathbb{R} $$ is a given continuous function.

Intuitively, we can think of ${y(t)}$ as a sort of boost (like an accelerator), whose total amount is fixed and normalised to 1. The problem is then equivalent to deciding how to distribute the boost over the interval $\left[ {a,b} \right]$ so as to maximise $J\left[ y \right]$.

If ${u(t)}$ attains its maximum at a unique point ${t^*} = \mathop {\arg \max }\limits_t u(t)$ (for example, if $u$ is strictly concave), then it is best to concentrate all the boost at ${t^*}$. In this case $${y^*} = \mathop {\arg \max }\limits_y J\left[ y \right] = {\delta _{{t^*}}}(t) $$ where ${\delta _{{t^*}}}(t)$ is the Dirac delta centred at ${t^*}$.

If ${u(t)}$ reaches its maximum on a countable set ${\left\{ {t_i^*} \right\}_{i \in \mathbb{N}}} \subset \left[ {a,b} \right]$, then $${y^*} = \sum\limits_i {{w_i}{\delta _{t_i^*}}(t)}$$ (a convex combination of Dirac deltas), where the weights satisfy ${w_i} \ge 0$ and $\sum\nolimits_i {{w_i}} = 1$ but are otherwise arbitrary.

If ${u(t)}$ reaches its maximum on an uncountable set ${T^*} = \left\{ {{t^*} \in \left[ {a,b} \right]|{t^*} = \mathop {\arg \max }\limits_t u(t)} \right\}$, then any nonnegative ${y(t)}$ of unit mass supported on ${T^*}$ will maximise the functional.
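The multiple-maximiser intuition can be checked numerically. The sketch below uses an illustrative $u$ (not from the question) with two equal maxima, at $t = 0.25$ and $t = 0.75$, and approximates the Dirac deltas by narrow normalised Gaussian bumps: every convex combination of the two bumps gives (approximately) the same value of $J$.

```python
import numpy as np

# Sketch: when u has two equal maximisers, any convex combination of
# narrow bumps at those points yields (approximately) the same J[y].
# u(t) = -((t - 0.25)(t - 0.75))**2 on [0, 1] is an illustrative choice
# with maximum u = 0 attained at t = 0.25 and t = 0.75.
t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
u = -((t - 0.25) * (t - 0.75)) ** 2
eps = 0.002

def bump(c):
    """A normalised Gaussian bump at c, approximating a Dirac delta."""
    g = np.exp(-(t - c) ** 2 / (2 * eps ** 2))
    return g / (np.sum(g) * dt)          # discrete mass = 1 exactly

for w in (0.0, 0.3, 0.7, 1.0):
    y = w * bump(0.25) + (1 - w) * bump(0.75)   # convex combination, mass 1
    print(f"w = {w}: J[y] = {np.sum(u * y) * dt:.2e}")   # ~ max u = 0 for every w
```

As $\varepsilon \to 0$ the printed values all converge to $\max u = 0$, independently of the weight $w$, which is exactly the degeneracy described above.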

As I said, the point is that I cannot translate these intuitions into rigorous (first-order) conditions in the form of a differential equation. The Euler-Lagrange equation does not help (or, at least, I am not able to make it help).

Any hint? Do you think it is possible to map the function space of ${y(t)}$ (or $J\left[ y \right]$) into a more tractable space? (I was wondering whether some integral transform might help...)

Thank you very much for your help.


The problem typically does not have a solution in any of the usual function spaces, as there is nothing better than concentrating all the mass of $y$ at a maximiser of $u$. In spaces of continuous or differentiable functions, you can choose a sequence of functions that approximates a Dirac delta, and then you see that the supremum of $J[y]$ over all continuous functions is $\max u$. However, if $\int_a^b u(t) y(t)\,dt=\max u$, then $\int_a^b(\max u-u(t))y(t)\, dt =0$, so (using $y(t)\ge 0$, $u(t)\le \max u$ and continuity) we have $y(t)(\max u - u(t))=0$ for every $t\in[a,b]$. Unless $u(t)=\max u$ on some interval, it follows that $y(t)=0$ everywhere, a contradiction to $\int_a^b y(t)\,dt=1$. So there is typically no maximiser among continuous functions.
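The approximation argument in the paragraph above can be illustrated numerically. This is a sketch with an illustrative choice of $u$ (not from the question): normalised Gaussian bumps concentrating at the unique maximiser drive $J[y]$ towards $\max u$ while the unit-mass constraint is maintained.

```python
import numpy as np

# Sketch of the approximation argument: narrow normalised Gaussian bumps
# centred at the maximiser of u drive J[y] towards max u.  The choice
# u(t) = -(t - 0.3)**2 on [0, 1] is illustrative, not from the question.
a, b = 0.0, 1.0
t = np.linspace(a, b, 200001)
dt = t[1] - t[0]
u = -(t - 0.3) ** 2                  # continuous, unique maximiser t* = 0.3
t_star = t[np.argmax(u)]

def J(y):
    """Riemann-sum approximation of J[y] = int_a^b u(t) y(t) dt."""
    return np.sum(u * y) * dt

for eps in (0.1, 0.01, 0.001):
    bump = np.exp(-(t - t_star) ** 2 / (2 * eps ** 2))
    y = bump / (np.sum(bump) * dt)   # enforce the unit-mass constraint
    print(f"eps = {eps}: J[y] = {J(y):.2e}")   # tends to max u = 0
```

No single continuous $y$ attains the limit, which is precisely the non-existence phenomenon: the maximising "function" escapes to a Dirac measure.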

In $L^p$ spaces, the situation is essentially the same: unless $u$ is equal to its maximum on a set of positive measure, any maximiser will have to be zero almost everywhere, a contradiction to the unit mass constraint.

Your choices are either to do what you did above, which is to relax your function space and consider the space of probability measures instead (which includes the Dirac deltas), or to change your functional to penalise functions with large derivatives, for example by choosing $$J_\alpha[y]=\int_a^b \left( u(t)y(t)-\alpha\,(y'(t))^2 \right) dt $$ for some $\alpha>0$ (written $\alpha$ rather than $a$ to avoid a clash with the endpoint of the interval). Then you can use the Euler-Lagrange equations (with a Lagrange multiplier for your isoperimetric constraint $\int_a^b y(t)\,dt=1$) to find a maximiser for any given $\alpha>0$, and then choose an $\alpha$ that gives you a solution you like.
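As a sanity check, the penalised first-order condition can be solved directly. With a multiplier for the mass constraint, stationarity of the penalised functional gives a second-order linear ODE for $y$ with natural boundary conditions $y'(a)=y'(b)=0$; integrating the ODE over $[a,b]$ pins down the multiplier as minus the mean of $u$. The sketch below implements this with illustrative choices of $u$, interval, and penalty weight (called `a_pen` in the code to avoid clashing with the endpoint `a`); note that nothing here enforces $y \ge 0$.

```python
import numpy as np

# Numerical sketch of the penalised approach.  The Euler-Lagrange
# condition with a multiplier lam for the mass constraint reads
#     2*a_pen*y'' + u + lam = 0,   y'(a) = y'(b) = 0,
# and integrating it over [a, b] forces lam = -mean(u).
# u(t) = -(t - 0.3)**2 on [0, 1] and a_pen = 0.05 are illustrative.
a, b, a_pen = 0.0, 1.0, 0.05
t = np.linspace(a, b, 10001)
dt = t[1] - t[0]
u = -(t - 0.3) ** 2

# Trapezoidal mean of u fixes the multiplier.
lam = -(np.sum(u) - (u[0] + u[-1]) / 2) * dt / (b - a)
ypp = -(u + lam) / (2 * a_pen)          # y'' from the Euler-Lagrange equation

# Integrate twice (cumulative trapezoid): y'(a) = 0 by construction,
# and y'(b) = 0 follows from the choice of lam.
yp = np.concatenate(([0.0], np.cumsum((ypp[1:] + ypp[:-1]) / 2) * dt))
y = np.concatenate(([0.0], np.cumsum((yp[1:] + yp[:-1]) / 2) * dt))

# Shift by a constant so that the discrete mass sum(y)*dt equals 1.
y += (1.0 - np.sum(y) * dt) / (len(t) * dt)

print("y'(b) =", yp[-1])          # ~ 0
print("mass  =", np.sum(y) * dt)  # = 1 up to rounding
# Caveat: nothing enforces y >= 0; for small a_pen the solution can dip
# below zero, which is the price of this relaxation.
```

As `a_pen` shrinks, the solution peaks ever more sharply near the maximiser of $u$, recovering the Dirac-delta intuition from the question in the limit.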