Integral Optimization Problem with inequality constraint

I would like to find the function $p(x)$ that maximizes the integral

$$I(p) = \int_{0}^{L}p(x)q(x)dx$$

subject to constraints $$\int_{0}^{L}p(x)dx=1$$ $$\int_{0}^{L}xp(x)dx=1$$ $$p(x) \geq 0 \;\; \forall \; x$$

The constraints simply say that $p(x)$ is a continuous 1D probability distribution on $[0,L]$ with mean equal to $1$. Also, $q(x)$ is a known function that depends only on $x$, and $L$ is a known finite number with $L > 2$.

What are the ways of solving this optimization problem? Any methods, analytical or numerical, are of interest.

EDIT: In the original question I had accidentally set $L=1$, which leads to a trivial solution.

There are 2 best solutions below


OK, your set of constraints implies that your density function must be a Dirac mass at $1$: in the original $L=1$ setting, the maximum of your random variable is $1$, so if its mean equals its maximum, it is almost surely equal to $1$.

So I think there might be an error in your question. Otherwise, if we do not care about the constraint on the mean, the idea is to put all the weight where $q$ is maximized: a Dirac mass at $x^* = \arg\max_{x \in [0,L]} q(x)$ is the answer. (If there are several maximizers, you can distribute your Dirac masses among them however you like; the same holds if there is a continuum of maximizers, but it is tedious to write down, since you have to normalize with indicator functions.)
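A quick way to make the mean-equals-maximum step precise (in the original $L=1$ setting): since $x \le 1$ on $[0,1]$,

$$\int_0^1 (1-x)\,p(x)\,dx = \int_0^1 p(x)\,dx - \int_0^1 x\,p(x)\,dx = 1 - 1 = 0,$$

and the integrand $(1-x)\,p(x)$ is nonnegative, so $p$ can put no mass anywhere except at $x=1$, i.e. $p = \delta_1$.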


1° Discretize your interval $[0, L]$ into $N$ subintervals of width $L/N$.

2° Adapt your constraints to the discretized variables.

3° Use a Lagrangian (or a linear-programming solver) to solve for your $N$ variables.

4° Let $N \to +\infty$.
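Steps 1°–3° can be sketched numerically: the discretized problem is a linear program in the values $p_i \approx p(x_i)$, so SciPy's `linprog` handles it directly. Below is a minimal sketch assuming a placeholder $q(x) = \sin(x)$ and $L = 3$ (neither is specified in the question).

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder problem data (assumptions, not from the question):
L, N = 3.0, 300
dx = L / N
x = (np.arange(N) + 0.5) * dx     # midpoints of the N subintervals
q = np.sin(x)                      # stand-in for the known q(x)

# Maximize sum(p_i * q_i * dx)  ==  minimize -q . p * dx
c = -q * dx
A_eq = np.vstack([np.full(N, dx),  # discretized  int p dx  = 1
                  x * dx])         # discretized  int x p dx = 1
b_eq = np.array([1.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.status, -res.fun)        # 0 on success; optimal discretized I(p)
```

Because the LP has only two equality constraints, a vertex solution concentrates the mass on at most two grid points, which is consistent with the Dirac-type solutions discussed in the other answer.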