Minimise $\max_x |f''(x)|$ on an interval when you know only the values of the function and of its derivative at the endpoints of the interval.


Given $a, b, c \in \mathbb{R}$, let $A_{abc}$ be the set of twice-differentiable real functions $f:[0,1] \to \mathbb{R}$ such that $f(0) = 0$, $f(1) = a$, $f'(0) = b$ and $f'(1) = c$. Find the infimum of $\left\{\max_{x \in [0,1]} |f''(x)| : f \in A_{abc}\right\} \subset \mathbb{R}$ in terms of $a$, $b$ and $c$.

I'm not sure I've written that correctly: I'm sure I'll have to re-write that more clearly. Anyway, the title should be somewhat clear as to what I'm after.

Example 1: [figure omitted]

Example 2: [figure omitted]

I think we should also add the constraint that $f'(0) \geq 0$: if we reflect every function in the $x$-axis (replace $f$ by $-f$), the cases with $f'(0) < 0$ become cases with $f'(0) > 0$ and $\max_{x}|f''(x)|$ is unchanged, so we lose no generality by assuming $f'(0) \geq 0$.

Anyway, I'd be very surprised if this wasn't some sort of duplicate as it's such a natural optimisation question: I just think I'm not using the right lingo.

There are 3 answers below.

Answer 1 (Alex Ravsky):

This answer is preliminary, but I’ll try to update it.

Given a twice-differentiable function $f$ from $[0,1]$ to $\Bbb R$, put $\|f\|=\sup_{x \in [0,1]} |f''(x)|$. If $f\in A_{abc}$, then the function $g(x)=f(x)-ax$ belongs to $A_{0,\,b-a,\,c-a}$ and $\|g\|=\|f\|$, so it suffices to consider the case $a=0$.

Given $b$ and $c$, denote the required infimum by $I$.

Let $f\in A_{0bc}$ be any function. By Rolle's theorem (since $f(0)=f(1)=0$), there exists $y\in (0,1)$ such that $f'(y)=0$. By Lagrange's mean value theorem, there exist $x_1\in (0,y)$ and $x_2\in (y,1)$ such that $f''(x_1)=\frac{f'(y)-f'(0)}{y-0}=-\frac{b}{y}$ and $f''(x_2)=\frac{f'(1)-f'(y)}{1-y}=\frac{c}{1-y}$. Moreover $\max\left\{\frac{|b|}{y},\frac{|c|}{1-y}\right\}\ge |b|+|c|$: this is trivial if $b=c=0$, and otherwise $\frac{|b|}{y}<|b|+|c|$ forces $y>\frac{|b|}{|b|+|c|}$, hence $1-y<\frac{|c|}{|b|+|c|}$ and $\frac{|c|}{1-y}>|b|+|c|$. It follows that $$\|f\|\ge \max\{|f''(x_1)|, |f''(x_2)|\}\ge |b|+|c|,$$ and hence $I\ge |b|+|c|$.

On the other hand, the polynomial $f(x)=(b+c)x^3-(2b+c)x^2+bx$ belongs to $A_{0bc}$ (one checks $f(0)=f(1)=0$, $f'(0)=b$, $f'(1)=c$), and since $f''(x)=6(b+c)x-2(2b+c)$ is affine, $|f''|$ is maximised at an endpoint of $[0,1]$, giving $$I\le \|f\|= 2\max\{|2b+c|,|b+2c|\}.$$
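A quick numerical sanity check of this polynomial, as a Python sketch (the function name `check_cubic` and the sample values are my own):

```python
import numpy as np

def check_cubic(b, c, n=10_001):
    """Verify the cubic f(x) = (b+c)x^3 - (2b+c)x^2 + bx from the answer:
    boundary data, and sup |f''| = 2*max(|2b+c|, |b+2c|)."""
    f   = lambda x: (b + c) * x**3 - (2*b + c) * x**2 + b * x
    fp  = lambda x: 3 * (b + c) * x**2 - 2 * (2*b + c) * x + b
    fpp = lambda x: 6 * (b + c) * x - 2 * (2*b + c)
    assert abs(f(0.0)) < 1e-12 and abs(f(1.0)) < 1e-12            # f(0) = f(1) = 0
    assert abs(fp(0.0) - b) < 1e-12 and abs(fp(1.0) - c) < 1e-12  # f'(0)=b, f'(1)=c
    sup = np.max(np.abs(fpp(np.linspace(0.0, 1.0, n))))
    # f'' is affine, so its sup on [0,1] is attained at an endpoint:
    assert abs(sup - 2 * max(abs(2*b + c), abs(b + 2*c))) < 1e-9
    return sup

print(check_cubic(1.0, 2.0))  # 2*max(|2b+c|, |b+2c|) = 10.0
```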

Answer 2 (Martin R):

Not a complete answer, but some lower bounds. As Alex already pointed out, it suffices to consider the case $a=0$. I will also denote the sought infimum by $I$.

From Taylor's formula at $x=0$ we have $$ 0 = f(1) = f(0) + f'(0) + \frac 12 f''(\xi) = b + \frac 12 f''(\xi) $$ for some $\xi \in (0, 1)$ and therefore $I \ge 2|b|$. Similarly, Taylor's formula at $x=1$ gives $I \ge 2|c|$, so that $$ \boxed{I \ge 2 \max(|b|, |c|) \, .} $$

If $b$ and $c$ have the same sign, then a better bound is possible. Without loss of generality assume that $b, c > 0$. With $M = \max_{x \in [0,1]} |f''(x)|$ we have, again using Taylor's formula, $$ bx - \frac M2 x^2 \le f(x) \le c(x-1) + \frac M2 (x-1)^2 \, , $$ which gives $$ M \ge 2\, \frac{bx + c(1-x)}{x^2 + (1-x)^2} $$ for all $x \in (0, 1)$. Setting $x = b/(b+c)$ gives $M \ge 2(b+c)$, so that $$ \boxed{I \ge 2|b+c| \text{ if } bc > 0 \, .} $$
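The lower-bound curve and the particular choice $x = b/(b+c)$ can be checked numerically; a Python sketch (the sample values $b=1.5$, $c=0.7$ are my own):

```python
import numpy as np

def lower_bound_curve(b, c, x):
    # Every f in A_{0,b,c} satisfies max|f''| >= this quantity, for each x in (0,1).
    return 2 * (b * x + c * (1 - x)) / (x**2 + (1 - x)**2)

b, c = 1.5, 0.7                              # sample values with b, c > 0
xs = np.linspace(1e-6, 1 - 1e-6, 100_001)
best = float(np.max(lower_bound_curve(b, c, xs)))
at_ratio = lower_bound_curve(b, c, b / (b + c))
assert abs(at_ratio - 2 * (b + c)) < 1e-9    # the choice x = b/(b+c) yields 2(b+c)
assert best >= at_ratio - 1e-6               # optimising over x does at least as well
print(best, at_ratio)
```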

Answer 3:

In this answer I show that the exact answer is $|b+c-2a|+\sqrt{2\bigl((b-a)^2+(c-a)^2\bigr)}$ (formula (5) below).

As noted in AlexRavsky's and MartinR's answers, we may assume without loss of generality that $a=0$, so that $f\in B_{b,c}=A_{0,b,c}$.

Next, if $f\in B_{b,c}$ with $b\neq 0$, then $g(x)=\frac{f(x)}{b}$ satisfies $g\in B_{1,\frac{c}{b}}$. So, assuming $b\neq 0$ (the limit case $b=0$ turns out to be similar to, and simpler than, the generic case), we may assume without loss of generality that $b=1$, so that $f\in C_{c}=A_{0,1,c}$.

Further, if $f \in C_c$, then $h(x)=-\frac{f(1-x)}{c}$ satisfies $h \in C_{\frac{1}{c}}$. So we may assume without loss that $|c| \geq 1$.
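Both reductions can be sanity-checked numerically, using the cubic from the first answer as a sample member of $B_{b,c}$; a Python sketch (names and sample values are my own):

```python
def make_cubic(b, c):
    """The cubic from the first answer: a sample member of B_{b,c} = A_{0,b,c}."""
    f  = lambda x: (b + c) * x**3 - (2*b + c) * x**2 + b * x
    fp = lambda x: 3 * (b + c) * x**2 - 2 * (2*b + c) * x + b
    return f, fp

b, c = 2.0, 3.0
f, fp = make_cubic(b, c)

# Scaling: g = f/b should lie in B_{1, c/b}.
g, gp = (lambda x: f(x) / b), (lambda x: fp(x) / b)
assert abs(g(0.0)) < 1e-12 and abs(g(1.0)) < 1e-12
assert abs(gp(0.0) - 1.0) < 1e-12 and abs(gp(1.0) - c / b) < 1e-12

# Reflection: for f1 in C_c (i.e. b = 1), h(x) = -f1(1-x)/c should lie in C_{1/c}.
f1, f1p = make_cubic(1.0, c)
h, hp = (lambda x: -f1(1 - x) / c), (lambda x: f1p(1 - x) / c)
assert abs(h(0.0)) < 1e-12 and abs(h(1.0)) < 1e-12
assert abs(hp(0.0) - 1.0) < 1e-12 and abs(hp(1.0) - 1.0 / c) < 1e-12
print("symmetry reductions verified")
```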

We assume $|c| \gt 1$ (the limit case $|c|=1$ turns out to be similar to, and simpler than, the generic case). Let $\varepsilon =\frac{c}{|c|}$ be the sign of $c$.

Let $m=\|f''\|_{\infty}$, let $z\in (0,1)$ be a constant to be fixed later, and let

$$ I_1 = \int_0^z (m+\varepsilon f''(t))(z-t)dt, \ I_2 = \int_z^1 (m-\varepsilon f''(t))(t-z)dt \tag{1} $$

Both $I_1$ and $I_2$ can be computed by two successive integrations by parts, and we find

$$ I_1 = m\frac{z^2}{2} - \varepsilon z + \varepsilon f(z), \ I_2 = m\frac{(1-z)^2}{2}+\varepsilon c(z-1)- \varepsilon f(z)\tag{2} $$
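Formula (2) can be confirmed by numerical quadrature against a sample $f \in C_c$, for instance the cubic from the first answer with $b=1$; a Python sketch (the sample values of $c$ and $z$ are my own):

```python
import numpy as np

def trapezoid(y, x):
    # composite trapezoid rule (plain NumPy, no SciPy dependency)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2))

c, eps, z = 2.0, 1.0, 0.3          # sample: c > 1, so eps = +1; z in (0,1)
f   = lambda x: (1 + c) * x**3 - (2 + c) * x**2 + x   # cubic in C_c = A_{0,1,c}
fpp = lambda x: 6 * (1 + c) * x - 2 * (2 + c)
m = max(abs(fpp(0.0)), abs(fpp(1.0)))  # f'' is affine, so this is ||f''||_inf

t1 = np.linspace(0.0, z, 200_001)
t2 = np.linspace(z, 1.0, 200_001)
I1 = trapezoid((m + eps * fpp(t1)) * (z - t1), t1)   # definition (1)
I2 = trapezoid((m - eps * fpp(t2)) * (t2 - z), t2)

I1_closed = m * z**2 / 2 - eps * z + eps * f(z)      # closed forms (2)
I2_closed = m * (1 - z)**2 / 2 + eps * c * (z - 1) - eps * f(z)
assert abs(I1 - I1_closed) < 1e-8 and abs(I2 - I2_closed) < 1e-8
assert I1 >= 0 and I2 >= 0         # nonnegative by construction
print(I1, I2)
```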

Now both $I_1$ and $I_2$ are nonnegative by construction (their integrands are nonnegative, since $|f''|\le m$), so $$I_1+I_2 = m\,\frac{z^2+(1-z)^2}{2}-\varepsilon\bigl(z+c(1-z)\bigr)\geq 0.$$ This means that $m\geq m_0$ where

$$ m_0=\frac{2\varepsilon (c+z-cz)}{z^2+(1-z)^2} \tag{3} $$

The inequality $m\geq m_0$ is an equality iff the integrands in $I_1$ and $I_2$ vanish a.e., i.e. iff $f''=-\varepsilon m$ on $[0,z]$ and $f''=+\varepsilon m$ on $[z,1]$. Integrating twice from $0$ (using $f(0)=0$ and $f'(0)=1$), it follows that $f(x)=-\varepsilon m\frac{x^2}{2}+x$ for $x\in [0,z]$ and $f(x)=\varepsilon m\frac{x^2}{2}+(1-2\varepsilon mz)x+\varepsilon mz^2$ for $x\in [z,1]$. The two conditions $f'(1)=c$ and $f(1)=0$ give a (nonlinear) system of two equations in $m$ and $z$, and it turns out that this system has the unique solution

$$ m=\varepsilon (c+1)+\sqrt{2(c^2+1)}, z = \frac{c-\varepsilon \sqrt{\frac{c^2+1}{2}}}{c-1}\tag{4} $$
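Plugging (4) back into the two conditions confirms it solves the system: in the piecewise form above, $f'(1)=c$ reads $\varepsilon m(1-2z)=c-1$ and $f(1)=0$ reads $\varepsilon m\bigl(z^2-2z+\tfrac12\bigr)=-1$. A Python sketch (the sample values of $c$ are my own):

```python
import math

def optimal_m_z(c):
    """Formula (4): claimed unique solution of the system, for |c| > 1."""
    eps = math.copysign(1.0, c)
    m = eps * (c + 1) + math.sqrt(2 * (c**2 + 1))
    z = (c - eps * math.sqrt((c**2 + 1) / 2)) / (c - 1)
    return m, z

for c in (2.0, 5.0, -3.0):                 # sample values with |c| > 1
    eps = math.copysign(1.0, c)
    m, z = optimal_m_z(c)
    assert 0 < z < 1 and m > 0
    assert abs(eps * m * (1 - 2*z) - (c - 1)) < 1e-9        # f'(1) = c
    assert abs(eps * m * (z**2 - 2*z + 0.5) + 1) < 1e-9     # f(1) = 0
print("formula (4) verified")
```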

Returning to the general case and unrolling the symmetries of the initial problem, we find the fully general bound for $f\in A_{a,b,c}$ (note that in (4) we have $\varepsilon(c+1)=|c+1|$, since $\varepsilon=\operatorname{sign}(c)=\operatorname{sign}(c+1)$ when $|c|>1$, and that each reduction above rescales the bound by a positive factor):

$$ m=|b+c-2a|+\sqrt{2\bigl((b-a)^2+(c-a)^2\bigr)} \tag{5} $$

Note 1. The limit cases where one of $b-a$, $c-a$ is zero (excluded above) are recovered by continuity: e.g. $b-a=0$, $c-a=t$ gives $m=|t|\,(1+\sqrt 2)$.

Note 2. The optimal solution we found does not strictly satisfy the requirements in the OP (its second derivative is discontinuous at $z$), but this is not a problem; by well-known density results, the non-$C^2$ optimal solution can be approximated by $C^2$ solutions, and the infimum stays the same (although it is never attained by a $C^2$ solution).
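As a final consistency check, the closed form can be tested numerically against the reductions used above; note the first term is implemented as $|b+c-2a|$, which is what the reductions actually produce (in (4), $\varepsilon(c+1)=|c+1|$ since $|c|>1$). A Python sketch (sample values are my own):

```python
import math

def bound(a, b, c):
    # Closed form for the infimum; the first term is |b + c - 2a|.
    return abs(b + c - 2*a) + math.sqrt(2 * ((b - a)**2 + (c - a)**2))

a, b, c = 0.5, 2.0, -1.0                     # sample values (my choice)
# Shift f(x) -> f(x) - ax maps A_{a,b,c} to A_{0, b-a, c-a}, preserving ||f''||.
assert abs(bound(a, b, c) - bound(0.0, b - a, c - a)) < 1e-12
# Scaling f -> f/b shows the bound is |b|-homogeneous in (b, c) when a = 0.
assert abs(bound(0.0, b, c) - abs(b) * bound(0.0, 1.0, c / b)) < 1e-12
# Reflection: bound(0,1,c) = |c| * bound(0,1,1/c).
assert abs(bound(0.0, 1.0, 2.0) - 2.0 * bound(0.0, 1.0, 0.5)) < 1e-12
# Agreement with formula (4) at a = 0, b = 1, sample values with |c| > 1:
for c0 in (3.0, -2.0):
    eps = math.copysign(1.0, c0)
    m4 = eps * (c0 + 1) + math.sqrt(2 * (c0**2 + 1))
    assert abs(bound(0.0, 1.0, c0) - m4) < 1e-12
print(bound(a, b, c))  # 3.0 for these sample values
```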