An optimization is to be carried out over the following function:
$$f(x_1,x_2;a,b)=\frac{1}{2}\left[x_1+x_2-x_1x_2+c(x_1;a,b)c(x_2;a,b)\right]$$ where $$c(x;a,b)=\sum_{i=0}^n\left[\left(\sum_{j=0}^i\frac{b_{j-1}-b_j}{a_{j-1}}\right)x+b_i\right]\mathbb{1}_{\left[a_i,a_{i-1}\right]}(x)$$ with the boundary conventions $a_{-1}=1$, $a_{n}=0$, $b_{-1}=0$ and $b_{n}=1$.
Here, $a$ and $b$ are the parameter vectors given as $$a=\{a_0,\ldots,a_n\},\quad b=\{b_0,\ldots,b_n\}$$ with $$0=a_{n}<a_{n-1}<\cdots < a_0 <a_{-1}=1$$ $$0=b_{-1}<b_{0}<\cdots<b_{n-1}<b_{n}=1$$
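For numerical experiments (the question mentions Matlab or Mathematica; a Python sketch is shown here purely as an illustration, and the function name `c` and list-based parameter layout are my own choices), $c(x;a,b)$ can be evaluated directly from the sum above:

```python
def c(x, a, b):
    """Evaluate the piecewise-linear c(x; a, b) from the definition above.

    a = [a_0, ..., a_n] with a_n = 0, and b = [b_0, ..., b_n] with b_n = 1;
    the boundary values a_{-1} = 1 and b_{-1} = 0 are prepended internally.
    """
    a_ext = [1.0] + list(a)   # a_ext[i + 1] == a_i,  a_ext[0] == a_{-1} = 1
    b_ext = [0.0] + list(b)   # b_ext[i + 1] == b_i,  b_ext[0] == b_{-1} = 0
    n = len(a) - 1
    for i in range(n + 1):
        lo, hi = a_ext[i + 1], a_ext[i]       # the interval [a_i, a_{i-1}]
        if lo <= x <= hi:
            # inner sum over j = 0..i gives the slope on this interval
            slope = sum((b_ext[j] - b_ext[j + 1]) / a_ext[j] for j in range(i + 1))
            return slope * x + b_ext[i + 1]   # intercept is b_i
    raise ValueError("x must lie in [0, 1]")
```

For $n=1$ with `a = [a0, 0]` and `b = [b0, 1]` this reproduces the two-piece formula derived below.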
The problem to be solved is given as:
$$\lim_{n\rightarrow\infty}\max_{a,b}\left[\min_{{x_1=x_2\in[0,1]}}f(x_1,x_2;a,b) -\min_{{x_1,x_2\in[0,1]}}f(x_1,x_2;a,b)\right]$$
My work: I decided to consider a finite $n$ first and then try to generalize to some very large $n$, not necessarily $n\rightarrow\infty$, since taking the limit directly seems impossible to me.
Assume that $n=1$. Then,
$$c(x;a,b)=\cases{\frac{-1+b_0(1-a_0)}{a_0}x+ 1,\quad 0 \leq x \leq a_0\\ b_0(-x+1),\quad a_0 \leq x \leq 1}$$
Accordingly it is easy to obtain
$$ f(x_1,x_2;a,b)=\\\small\cases{\frac{1}{2}\left[x_1+x_2-x_1x_2+\left(\frac{-1+b_0(1-a_0)}{a_0}x_1+1\right)\left(\frac{-1+b_0(1-a_0)}{a_0}x_2+1\right)\right],\quad 0 \leq x_1\leq a_0,\, 0 \leq x_2\leq a_0\\ \frac{1}{2}\left[x_1+x_2-x_1x_2+\left(\frac{-1+b_0(1-a_0)}{a_0}x_1+1\right)b_0(-x_2+1)\right],\quad 0 \leq x_1\leq a_0,\, a_0 \leq x_2\leq 1\\ \frac{1}{2}\left[x_1+x_2-x_1x_2+b_0(-x_1+1)\left(\frac{-1+b_0(1-a_0)}{a_0}x_2+1\right)\right],\quad a_0 \leq x_1\leq 1,\, 0 \leq x_2\leq a_0\\ \frac{1}{2}\left[x_1+x_2-x_1x_2+b_0(-x_1+1)b_0(-x_2+1)\right],\quad a_0 \leq x_1\leq 1,\, a_0 \leq x_2\leq 1}$$
With the constraint $x_1=x_2$, I have
$$f(x_1=x_2;a,b)=\cases{\frac{1}{2}\left[2x_1-{x_1}^2+\left(\frac{-1+b_0(1-a_0)}{a_0}x_1+1\right)^2\right],\quad 0 \leq x_1 \leq a_0\\\frac{1}{2}\left[2x_1+b_0^2(1-x_1)^2-{x_1}^2\right],\quad a_0 \leq x_1 \leq 1}$$
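As a quick numerical sanity check on this diagonal restriction (a sketch; the parameter values $a_0=0.4$, $b_0=0.6$ are my own arbitrary choices), one can confirm that $c$ is continuous at $a_0$ and that the diagonal function dips below its endpoint values:

```python
import numpy as np

# Arbitrary example parameters for n = 1 (my choice, not from the problem)
a0, b0 = 0.4, 0.6
x = np.linspace(0.0, 1.0, 1001)
s = (-1 + b0 * (1 - a0)) / a0                 # slope of c on [0, a0]
c = np.where(x <= a0, s * x + 1.0, b0 * (1.0 - x))
f_diag = 0.5 * (2.0 * x - x**2 + c**2)        # f(x, x; a, b) on the diagonal

print(f_diag[0], f_diag[-1])                  # both endpoints equal 1/2
print(x[f_diag.argmin()], f_diag.min())       # location and value of the minimum
```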
From here, to minimize $f(x_1,x_2;a,b)$, I take the derivative of this function with respect to $x_1$ and $x_2$ and set it equal to zero. I do the same for $f(x_1=x_2;a,b)$. For $f(x_1,x_2;a,b)$, setting the derivatives to zero corresponds to minimization only in the case $0 \leq x_1\leq a_0$, $0 \leq x_2\leq a_0$; in the other three cases it only maximizes. Then I check what kind of function I am dealing with in 3D and see that it is increasing, so I select the minimum values of $x_1$ and $x_2$. Next I compare the first case with the other three cases by taking the differences and plotting them in 3D; the first case takes smaller values than all the other cases at every point $(x_1,x_2)$.
I do the same for the function $f(x_1=x_2;a,b)$ and see that the first case is again the minimizer.
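A remark on whether setting the derivatives to zero yields a minimum: on each of the four rectangles above, $c$ is linear in each argument, so $f$ is bilinear in $(x_1,x_2)$ there. Its Hessian then has zero diagonal, so any interior stationary point is a saddle (except in the degenerate case where the slope squared equals $1$), and the minimum over each rectangle is attained on its boundary, in fact at a corner. A quick SymPy check for the first region, with $s$ standing for the slope $\frac{-1+b_0(1-a_0)}{a_0}$ (this symbolic sketch is my own, not part of the original derivation):

```python
import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s', real=True)
# First region: c(x) = s*x + 1 on [0, a0], with s = (-1 + b0*(1 - a0))/a0
f = sp.Rational(1, 2) * (x1 + x2 - x1 * x2 + (s * x1 + 1) * (s * x2 + 1))
H = sp.hessian(f, (x1, x2))
print(H)                   # zero diagonal: f is bilinear in (x1, x2)
print(sp.factor(H.det()))  # det(H) <= 0, so interior stationary points are saddles
```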
After that I take the difference of the first cases of both: $$f(x_1=x_2;a,b)-f(x_1,x_2;a,b)$$ This difference depends only on the parameters $a_0,b_0$, because I took the derivatives, set them equal to zero, and substituted the solutions back into the original equations.
In the next step I must find the maximum over all $(a_0,b_0)\in(0,1)^2$. This step gives me $0$.
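This maximization can also be cross-checked numerically. The sketch below (the grid resolutions and parameter ranges are my own arbitrary choices) evaluates $f$ on a grid for the case $n=1$ and compares the diagonal minimum with the global minimum for each parameter pair:

```python
import numpy as np

def gap_n1(a0, b0, m=201):
    """For n = 1: min of f over the diagonal x1 = x2 minus min of f over the
    whole square, both evaluated on an m x m grid."""
    x = np.linspace(0.0, 1.0, m)
    s = (-1 + b0 * (1 - a0)) / a0               # slope of c on [0, a0]
    c = np.where(x <= a0, s * x + 1.0, b0 * (1.0 - x))
    F = 0.5 * (x[:, None] + x[None, :] - x[:, None] * x[None, :]
               + c[:, None] * c[None, :])
    # The grid diagonal is a subset of the full grid, so this is >= 0.
    return np.diag(F).min() - F.min()

grid = np.linspace(0.05, 0.95, 19)
print(max(gap_n1(a0, b0) for a0 in grid for b0 in grid))
```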
For $n\geq 2$, the result is no longer $0$, but I don't know how to proceed. As $n$ increases there are too many cases to compare. Does setting the derivatives equal to zero give a maximum or a minimum? Even assuming I find the critical points, I can no longer check them by visual inspection. Which cases should I consider? How should I approach this problem? For example, I could also use Matlab or Mathematica.
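Since a program is acceptable, here is a rough NumPy sketch for general $n$ (everything here, the function names, the grid size, and the plain random search over admissible $(a,b)$, is my own arbitrary setup rather than a definitive implementation; the same idea transfers directly to Matlab or Mathematica, or to a proper optimizer such as SciPy's):

```python
import numpy as np

rng = np.random.default_rng(0)

def c_vec(x, a, b):
    """Vectorized c(x; a, b); a = [a_0, ..., a_n] (a_n = 0), b = [b_0, ..., b_n] (b_n = 1)."""
    a_ext = np.concatenate(([1.0], a))   # prepend a_{-1} = 1
    b_ext = np.concatenate(([0.0], b))   # prepend b_{-1} = 0
    out = np.empty_like(x)
    for i in range(len(a)):
        slope = sum((b_ext[j] - b_ext[j + 1]) / a_ext[j] for j in range(i + 1))
        mask = (x >= a_ext[i + 1]) & (x <= a_ext[i])   # the interval [a_i, a_{i-1}]
        out[mask] = slope * x[mask] + b_ext[i + 1]
    return out

def gap(a, b, m=201):
    """min of f on the diagonal minus min of f on the square, on an m x m grid."""
    x = np.linspace(0.0, 1.0, m)
    cx = c_vec(x, a, b)
    F = 0.5 * (x[:, None] + x[None, :] - x[:, None] * x[None, :]
               + cx[:, None] * cx[None, :])
    return np.diag(F).min() - F.min()

def best_gap(n, trials=500):
    """Crude random search over admissible parameter vectors a and b."""
    best = 0.0
    for _ in range(trials):
        # sorted uniform samples give 1 > a_0 > ... > a_{n-1} > a_n = 0
        a = np.append(np.sort(rng.uniform(0, 1, n))[::-1], 0.0)
        # and 0 < b_0 < ... < b_{n-1} < b_n = 1
        b = np.append(np.sort(rng.uniform(0, 1, n)), 1.0)
        best = max(best, gap(a, b))
    return best

print(best_gap(2))
print(best_gap(3))
```

A refinement would be to use the best random draw as a starting point for a local optimizer, and to exploit the fact that $f$ is bilinear on each grid cell (so the inner minima sit on cell corners) to replace the dense grid with an exact corner search.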
I will be happy to hear your comments. I can do the work; solutions for $n\in\{2,3\}$, either analytically or with a program, would be enough.