Optimisation problem with a definite integral whose bounds are the minimum of a set


What I am trying to do is find an optimal $b_2$ for my objective function. The objective function is an expected utility over two outcomes. I denote the probability of outcome 1 by $P$ and that of outcome 2 by $1 - P$. This probability comes from a multidimensional uniform distribution; it is a marginal CDF. For simplicity, I assume that only a variable in one dimension deviates and the rest are parameters. Another assumption is that all variables lie in the interval $(0, 1]$.

The problem is that the bounds are functions of $b_2$.

The probability term is $$ P(b_2) = \int_{0}^{\min\{1,\, x(b_2)\}} \min\{1,\, y(b_2, h)\}\, f(h)\, dh. $$ Here $x$ and $y$ are quadratic functions of $b_2$, and I know for which values of $b_2$ the bounds exceed $1$. For example, when $0.5 < b_2 < 0.7$, $x$ is strictly smaller than $1$.
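To make the objective concrete, here is a minimal numerical sketch of $P(b_2)$ with $f$ uniform on $(0, 1]$. The quadratics `x` and `y` below are placeholders of my own invention; the real coefficients come from the underlying model.

```python
from scipy.integrate import quad

def x(b2):
    # Placeholder quadratic standing in for the question's x(b_2)
    return 2.0 * b2 - 1.5 * b2**2

def y(b2, h):
    # Placeholder quadratic standing in for the question's y(b_2, h)
    return 0.5 + b2**2 - 0.3 * h

def P(b2):
    """P(b2) = ∫_0^{min(1, x(b2))} min(1, y(b2, h)) f(h) dh, f(h) = 1 on (0, 1]."""
    upper = min(1.0, x(b2))
    if upper <= 0.0:
        return 0.0
    val, _ = quad(lambda h: min(1.0, y(b2, h)), 0.0, upper)
    return val
```

The `min` in the integrand and in the upper limit are what create the kinks that make a single closed-form first-order condition insufficient.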

So what I am thinking is that

  1. Split the probability into 4 cases, according to whether each bound is bigger than $1$ or not.
  2. Find the probability-maximising $b_2$ for each case.
  3. Check whether this $b_2$ is smaller than $1$ and also lies in the range of $b_2$ for that case, as in the example above.
  4. If step 3 is satisfied, keep $b_2$ as an optimal value.
  5. If not, discard $b_2$ and the case it was derived from.

Are these the correct steps?


I’m not sure I understand your approach, possibly due to typos and/or a language barrier – for instance, I don’t know what you mean by “Set $4$ cases of this probability”. One thing that definitely seems wrong in your description is “See if $b_2$ is smaller than $1$”, which should probably be “$x(b_2)\lt1$”; I also suspect that where you say “maximizing $b_2$” you want to be maximizing $P(b_2)$.

In any case, the standard approach to this sort of problem would be to optimize separately for the two cases of the minimum, i.e. optimize $P(b_2)$ with respect to unrestricted $b_2$ with the upper limit of the integral taken as $1$, and the same with the upper limit taken as $x(b_2)$. If you find any optima, you can check for self-consistency, i.e., in the first case whether $x(b_2)\ge1$ and in the second case whether $x(b_2)\le1$. You then need to compare any self-consistent optima you find with the values on the boundary, $x(b_2)=1$, since $P(b_2)$ isn’t differentiable there so you wouldn’t find those by setting the derivative to zero.
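The case-by-case optimization with self-consistency checks can be sketched as follows. The quadratics `x` and `y` are again hypothetical placeholders, and $f$ is taken as uniform on $(0, 1]$; substitute the actual model's functions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def x(b2):
    return 2.0 * b2 - 1.5 * b2**2        # placeholder quadratic

def y(b2, h):
    return 0.5 + b2**2 - 0.3 * h         # placeholder quadratic

def P_with_upper(b2, upper):
    """P(b2) with the upper limit of the integral imposed externally."""
    upper = max(0.0, min(1.0, upper))
    val, _ = quad(lambda h: min(1.0, y(b2, h)), 0.0, upper)
    return val

candidates = []

# Case 1: upper limit fixed at 1 (self-consistent only where x(b2) >= 1).
res1 = minimize_scalar(lambda b: -P_with_upper(b, 1.0),
                       bounds=(0.0, 1.0), method="bounded")
if x(res1.x) >= 1.0:
    candidates.append((res1.x, -res1.fun))

# Case 2: upper limit x(b2) (self-consistent only where x(b2) <= 1).
res2 = minimize_scalar(lambda b: -P_with_upper(b, x(b)),
                       bounds=(0.0, 1.0), method="bounded")
if x(res2.x) <= 1.0:
    candidates.append((res2.x, -res2.fun))

# Boundary: evaluate P where x(b2) = 1, since P may be non-differentiable
# there. For these placeholder quadratics, solve 2*b - 1.5*b^2 = 1.
for r in np.roots([-1.5, 2.0, -1.0]):
    if np.isreal(r) and 0.0 < r.real <= 1.0:
        b = float(r.real)
        candidates.append((b, P_with_upper(b, 1.0)))

best = max(candidates, key=lambda t: t[1])
```

Note that the same treatment applies to the inner $\min\{1, y(b_2, h)\}$: it splits the integrand itself into regions, which is presumably where the asker's count of 4 cases comes from.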