I am facing a physics problem, and I need to demonstrate a result. To get rid of multiple constants and unnecessary variables, I decided to write it in a simpler form.
Let the constants $ b>273,\ 0<p_1<1, \ 0<p_2<1 $ with $ p_1 + p_2 = 1 $ and $ x_2> x_1> 1 $
Let $ f $ and $ g $ be two functions defined on $] 1; 125 [$ by:
$ f (x) = x ^ 4. \exp (\frac{-1414}{x + b}) $
$ g (x) = x ^ {1.9}. \exp (\frac{-1414}{x + b}) $
$ x_ {r_1} $ is the root of the equation $ f (x) -p_1.f (x_1) -p_2.f (x_2) = 0 $
$ x_ {r_2} $ is the root of the equation $ g (x) -p_1.g (x_1) -p_2.g (x_2) = 0 $
I want to show that $\forall (x_1,x_2)\in\,]1;125[^2$ with $x_1<x_2$, we have $x_{r_1}>x_{r_2}$.
Is it possible to demonstrate this? If yes, according to which theorem / method?
Thank you in advance for your answers.
EDIT1 : the result must hold for all $x_1\in]1;125[$ and $x_2\in]1;125[$, as well as for all $p\in]0;1[$
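Before attempting a proof, the claim is easy to spot-check numerically. A minimal sketch in Python (pure standard library; the value $b=300$ and the sampling ranges are arbitrary choices of mine, with $p_1=1-p$ and $p_2=p$):

```python
import math
import random

B = 300.0  # any constant b > 273; this particular value is an arbitrary choice

def f(x):
    return x**4 * math.exp(-1414.0 / (x + B))

def g(x):
    return x**1.9 * math.exp(-1414.0 / (x + B))

def find_root(h, lo, hi):
    """Bisection for an increasing function h with h(lo) < 0 < h(hi)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
for _ in range(1000):
    x1 = random.uniform(1.01, 60.0)
    x2 = x1 + random.uniform(1.0, 60.0)   # guarantees 1 < x1 < x2 < 125
    p = random.uniform(0.05, 0.95)
    # Both f and g are increasing, so each root lies in (x1, x2).
    xr1 = find_root(lambda x: f(x) - (1 - p) * f(x1) - p * f(x2), x1, x2)
    xr2 = find_root(lambda x: g(x) - (1 - p) * g(x1) - p * g(x2), x1, x2)
    assert xr1 > xr2
print("x_r1 > x_r2 held for every sample")
```

Of course this is only evidence, not a proof, but it suggests the conjecture is worth pursuing.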
EDIT2 :
As for this problem, I encountered it while working on the Sedyakin principle (reliability engineering).
In fact, in the FIDES guide, to calculate the reliability of an electronic component, we need to know the stress applied to that component and then calculate the acceleration factor related to that stress. The guide describes 6 different types of stress, and I was working on the one related to thermal cycling.
To calculate the thermal cycling stress, these two formulas are used for a given phase of the life cycle of the component:
$$\Pi_{TcySolder Joints}(\Delta T_{cycling})=\left(\frac{12.N_{annual-cy}}{t_{annual}}\right).\left(\frac{\Delta T_{cycling}}{20}\right)^{1.9}.\exp{\left[1414.\left(\frac{1}{313}-\frac{1}{\Delta T_{cycling}+T_{amb}+273}\right)\right]}$$
$$\Pi_{TcyCase}(\Delta T_{cycling})=\left(\frac{12.N_{annual-cy}}{t_{annual}}\right).\left(\frac{\Delta T_{cycling}}{20}\right)^{4}.\exp{\left[1414.\left(\frac{1}{313}-\frac{1}{\Delta T_{cycling}+T_{amb}+273}\right)\right]}$$
where :
$t_{annual}$ : Time associated with each phase over a year (hours)
$T_{amb}$ : Average temperature during a phase (°C)
$\Delta T_{cycling}$ : Amplitude of variation associated with a cycling phase (°C) (in this case is the $x$)
$N_{annual-cy}$ : Number of cycles associated with each cycling phase during a year (cycles)
The value 1414 is an empirical constant that comes from the Norris-Landzberg law for acceleration factor calculations:
$1414=\frac{E_a}{k}$
$E_a$ is the activation energy
$k$ is Boltzmann’s constant
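For context, the two acceleration factors are straightforward to compute. A minimal sketch (the numerical inputs below are made-up illustration values, not taken from the FIDES guide):

```python
import math

def pi_tcy(dT, T_amb, n_annual_cy, t_annual, exponent):
    """Thermal-cycling Pi factor; exponent = 1.9 (solder joints) or 4 (case)."""
    return (12.0 * n_annual_cy / t_annual) * (dT / 20.0) ** exponent \
        * math.exp(1414.0 * (1.0 / 313.0 - 1.0 / (dT + T_amb + 273.0)))

# Illustrative inputs (assumed, not from the guide): 2 cycles/day over a full
# year (8760 h), 25 degC ambient, 30 degC cycling amplitude.
pi_solder = pi_tcy(dT=30.0, T_amb=25.0, n_annual_cy=730, t_annual=8760, exponent=1.9)
pi_case   = pi_tcy(dT=30.0, T_amb=25.0, n_annual_cy=730, t_annual=8760, exponent=4)
print(pi_solder, pi_case)
```

Since the two formulas differ only in the exponent, $\Delta T_{cycling}$ plays the role of $x$ above (after absorbing the common prefactors), which is how the simplified $f$ and $g$ arise.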
Let me try. Consider the equations $$ f(x) = (1-p)f(x_1) + pf(x_2) \\ g(x) = (1-p)g(x_1) + pg(x_2) $$ with $p\in (0,1)$. If the solutions are continuous with respect to $p$ (and I see no reason why they should not be), then obviously $x_{r_1}(0) = x_{r_2}(0) = x_1$ and $x_{r_1}(1) = x_{r_2}(1) = x_2>x_1>1$. Note that $f(x)$ and $g(x)$ are monotonically increasing functions, as is $\frac{f(x)}{g(x)} = x^{2.1}$. The RHS is linear and increasing in $p$, so any solution $x_r(p)$ is monotonically increasing in $p$ and thus $x_r(p) \in (x_1,x_2)$.
Rewrite the first equation as $$ g(x) = (1-p)\left(\frac{x_1}{x}\right)^{2.1} g(x_1) + p \left(\frac{x_2}{x}\right)^{2.1} g(x_2). $$ Now let $x_{r_1} \in (x_1,x_2)$ be a solution of this equation for some $p$. It then suffices to show $$ (1-p)\left(\frac{x_1}{x_{r_1}}\right)^{2.1} g(x_1) + p \left(\frac{x_2}{x_{r_1}}\right)^{2.1} g(x_2) > (1-p) g(x_1) + p g(x_2), $$ since the left-hand side equals $g(x_{r_1})$, the right-hand side equals $g(x_{r_2})$, and $g(x)$ is monotonically increasing, giving $x_{r_1} > x_{r_2}$.
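This rewriting can be spot-checked numerically; the sketch below (with arbitrary choices $b=300$, $x_1=10$, $x_2=100$, $p=0.5$) confirms the inequality at the root:

```python
import math

B, x1, x2, p = 300.0, 10.0, 100.0, 0.5   # arbitrary illustration values, b > 273

def f(x):
    return x**4 * math.exp(-1414.0 / (x + B))

def g(x):
    return x**1.9 * math.exp(-1414.0 / (x + B))

# Bisection for x_r1; f is increasing, so the root lies in (x1, x2).
lo, hi = x1, x2
target = (1 - p) * f(x1) + p * f(x2)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) < target:
        lo = mid
    else:
        hi = mid
xr1 = 0.5 * (lo + hi)

# LHS equals g(xr1) by construction, since f(xr1) = xr1^2.1 * g(xr1) = target.
lhs = (1 - p) * (x1 / xr1)**2.1 * g(x1) + p * (x2 / xr1)**2.1 * g(x2)
rhs = (1 - p) * g(x1) + p * g(x2)
assert lhs > rhs   # hence g(x_r1) > g(x_r2), and so x_r1 > x_r2
```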
The inequality can be rewritten as $$ p>\frac{1}{1+\frac{g(x_2)}{g(x_1)} \frac{\left(\frac{x_2}{x_{r_1}}\right)^{2.1}-1}{1-\left(\frac{x_1}{x_{r_1}}\right)^{2.1}}} $$ and the RHS is monotonically increasing with $x_{r_1}$. Note that if $x_{r_1}$ is a solution to $$ f(x) = (1-p)f(x_1) + pf(x_2) $$ then this solution (and thus the RHS) will always be smaller than the solution $x_{r_{1e}}$ of the equation $$ x^{\nu} \, x_{1,2}^{4-\nu} \, e^{\frac{-1414}{x_1+b}} = (1-p)f(x_1) + pf(x_2) $$ where $\nu \leq 4$ for $x_1$ and $\nu\geq 4$ for $x_2$. Thus we need to show $$ 1>\frac{1/p}{1+\frac{g(x_2)}{g(x_1)} \frac{\left(\frac{x_2}{x_{r_{1e}}}\right)^{2.1}-1}{1-\left(\frac{x_1}{x_{r_{1e}}}\right)^{2.1}}} $$
Choosing $\nu=4$ will get the statement for all $p<p_0$ for some $p_0$.
I am out of time for now, but I will continue if something better crosses my mind.
PS: Actually, forget the last part; it probably leads nowhere. However - I haven't tried it out yet, because it is computationally intensive - I think the way to go is to prove that $$ \frac{{\rm d^2}}{{\rm d}p^2} \left\{ (1-p)\left(\frac{x_1}{x_{r_1}(p)}\right)^{2.1} g(x_1) + p \left(\frac{x_2}{x_{r_1}(p)}\right)^{2.1} g(x_2) \right\} < 0 $$ for all $p \in (0,1)$ by implicit differentiation of $$ F(x,p) = f(x) - (1-p)f(x_1) - pf(x_2) = 0, $$ because then the curve is concave and lies above the line $$ (1-p) g(x_1) + p g(x_2) $$ with coinciding endpoints, which is all you need.
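This concavity claim can itself be probed numerically before doing the calculation; a sketch with the same arbitrary choices as before ($b=300$, $x_1=10$, $x_2=100$), checking that the second differences of $p \mapsto g(x_{r_1}(p))$ are negative on a grid:

```python
import math

B, x1, x2 = 300.0, 10.0, 100.0   # arbitrary illustration values, b > 273

def f(x):
    return x**4 * math.exp(-1414.0 / (x + B))

def g(x):
    return x**1.9 * math.exp(-1414.0 / (x + B))

def xr1(p):
    """Solve f(x) = (1-p) f(x1) + p f(x2) by bisection (f is increasing)."""
    lo, hi = x1, x2
    target = (1 - p) * f(x1) + p * f(x2)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# g(xr1(p)) should be concave in p: negative second differences on a grid.
ps = [i / 20.0 for i in range(1, 20)]
h = [g(xr1(p)) for p in ps]
concave = all(h[i - 1] - 2 * h[i] + h[i + 1] < 0 for i in range(1, len(h) - 1))
print(concave)
```

Concavity plus coinciding endpoints would place the whole curve above the chord $(1-p)g(x_1)+pg(x_2)$, which is exactly the inequality needed.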
PPS: When doing the calculation (which I did with Maple), what you end up with is the requirement $$ f \left( x \right) \left( {\frac {{\rm d}^{2}}{{\rm d}{x}^{2}}}f \left( x \right) \right) x-2\, \left( {\frac {\rm d}{{\rm d}x}}f \left( x \right) \right) ^{2}x+f \left( x \right) \left( \epsilon+1 \right) {\frac {\rm d}{{\rm d}x}}f \left( x \right) <0 $$ where $\epsilon=2.1>0$ is the exponent difference, and thus it remains to show that $$ x\left\{\frac{1}{2} \, \frac{{\rm d}^2}{{\rm d}x^2} f(x)^2 - 3 \left(\frac{{\rm d}f(x)}{{\rm d}x}\right)^2 \right\} + \frac{\epsilon+1}{2} \, \frac{{\rm d}}{{\rm d}x} f(x)^2 < 0 $$
Interestingly the inequality does not depend on $x_1$ and $x_2$ anymore!
Finally the last inequality evaluated reads $$ k \left( \epsilon-k \right) {x}^{4}+ \left( 4\,b\epsilon\,k-4\,b{k}^{2 }+a\epsilon-2\,ak-a \right) {x}^{3}+ \left( 6\,{b}^{2}\epsilon\,k-6\,{ b}^{2}{k}^{2}+2\,ab\epsilon-4\,abk-{a}^{2} \right) {x}^{2}+{b}^{2} \left( 4\,b\epsilon\,k-4\,b{k}^{2}+a\epsilon-2\,ak+a \right) x+{b}^{4 }k \left( \epsilon-k \right) < 0 $$ where $k=4>\epsilon=2.1$ is the power of the prefactor in $f(x)$ and $a=1414>0$. Obviously all coefficients are negative.
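The sign claim is easy to verify mechanically; a sketch evaluating the five coefficients for several values of $b>273$ (with $k=4$, $\epsilon=2.1$, $a=1414$ from the problem):

```python
k, eps, a = 4.0, 2.1, 1414.0   # k > eps > 0, a > 0, as in the problem

def coeffs(b):
    """Coefficients of the quartic in x, highest degree first."""
    return [
        k * (eps - k),
        4*b*eps*k - 4*b*k**2 + a*eps - 2*a*k - a,
        6*b**2*eps*k - 6*b**2*k**2 + 2*a*b*eps - 4*a*b*k - a**2,
        b**2 * (4*b*eps*k - 4*b*k**2 + a*eps - 2*a*k + a),
        b**4 * k * (eps - k),
    ]

# Every coefficient is negative for each sampled b > 273, so the quartic is
# negative for all x > 0, as claimed.
for b in (273.16, 300.0, 500.0, 1000.0):
    assert all(c < 0 for c in coeffs(b))
print("all coefficients negative")
```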
$\epsilon=0$ seems to be valid too, but that is only because I divided out overall common factors $>0$, in particular a common $\epsilon$. $\quad\square$
One interesting remark: there is one special case, with $k<1$, $\epsilon$ sufficiently small, and $a$ large enough, in which the linear coefficient can become positive, and thus the LHS can become positive for $x$ in some interval $x_A<x<x_B$.