Given $p \in \mathbb R^N$ and $X$ a symmetric matrix, find $\varphi \in C^2$ such that $D\varphi = p, D^2\varphi = X$ at a given point


In their paper User's guide to viscosity solutions of second order partial differential equations, Crandall, Ishii and Lions define the superjet of an upper semicontinuous function $u$ at $\hat x \in \mathcal O$, $\mathcal O$ a locally compact subset of $\mathbb R^N$, to be the set of $(p, X) \in \mathbb R^N \times \mathcal S_N$ ($\mathcal S_N$ the set of symmetric matrices) such that \begin{equation} u(x) \leq u(\hat x) + \langle p, x - \hat x \rangle + \frac 12 \langle X(x - \hat x), x - \hat x \rangle + o(|x - \hat x|^2) \quad \quad (*) \end{equation} holds. The superjet is denoted $J_{\mathcal O}^{2, +}u(\hat x)$. It is left as an exercise that $$ J_{\mathcal O}^{2, +}u(\hat x) = \{(D\varphi(\hat x), D^2 \varphi(\hat x)) \ : \ \varphi \in C^2 \text{ and } u - \varphi \text{ has a local maximum at } \hat x\}. $$

My question is: given $(p, X) \in \mathbb R^N \times \mathcal S_N$ such that $(*)$ holds, how to find such a $\varphi$?

Thanks in advance.

There are 3 answers below.

Accepted answer:

This is proved in Proposition 12.11 of Calder's Viscosity Solutions notes: http://www-users.math.umn.edu/~jwcalder/viscosity_solutions.pdf

The proof is similar to the one given by your professor, copied in another answer. If some step in the proof is not clear, I can elaborate on it in this answer.

EDIT: My guess is that your confusion is near the end of the proof, where one verifies that the constructed test function has the correct derivatives. I'll add some details here, following the proof in Calder's notes above.

At the end of the proof we've shown that $$u(x) \leq p \cdot x + \frac{1}{2}x^T X x + \sigma(3|x|)=:\varphi(x)$$ in a neighborhood of the origin (we've taken $\hat{x}=0$). Earlier in the proof we defined $$\sigma(r) = \int_0^r \int_0^s \rho(t)\, dt\, ds,$$ where $\rho$ is continuous, nondecreasing, and $\rho(0)=0$. From this, we immediately have $$(*) \ \ \ \sigma(r) \leq \frac{1}{2}r^2\rho(r),$$ $$(**) \ \ \ \sigma'(r) = \int_0^r \rho(t) \, dt \leq r\rho(r),$$ and $$(***) \ \ \ \sigma''(r) \leq \rho(r).$$ In particular, $\sigma$ is $C^2$ on $[0,\infty)$ and $$\sigma(0)=\sigma'(0)=\sigma''(0)=0.$$
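As a quick numerical sanity check of the three bounds, here is a sketch with the concrete choice $\rho(t) = t$ (my choice for illustration, not from the notes), for which $\sigma(r) = r^3/6$, $\sigma'(r) = r^2/2$, $\sigma''(r) = r$:

```python
import numpy as np

# Sanity check of (*), (**), (***) for the sample modulus rho(t) = t
# (continuous, nondecreasing, rho(0) = 0). Then
#   sigma(r)  = int_0^r int_0^s t dt ds = r^3 / 6,
#   sigma'(r) = r^2 / 2,   sigma''(r) = r.
rho = lambda t: t
sigma = lambda r: r**3 / 6
dsigma = lambda r: r**2 / 2
d2sigma = lambda r: r

r = np.linspace(0, 2, 201)
assert np.all(sigma(r) <= 0.5 * r**2 * rho(r) + 1e-12)   # (*)
assert np.all(dsigma(r) <= r * rho(r) + 1e-12)           # (**)
assert np.all(d2sigma(r) <= rho(r) + 1e-12)              # (***)
print("all three bounds hold")
```

Of course this checks only one sample $\rho$; the inequalities in the proof hold for any continuous nondecreasing $\rho$ with $\rho(0) = 0$.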

I assume what is not clear is to show that $D \varphi(0)= p$ and $D^2\varphi(0)=X$, as is claimed (without details) in the notes. To prove this, we just need to consider the perturbative term $$\psi(x) := \sigma(3|x|),$$ and show that its gradient and Hessian vanish at $x=0$. By $(*)$ we have $$|\psi(x)| \leq \frac{9}{2}\rho(3|x|)|x|^2.$$ It follows by writing the difference quotient definition of the derivative that $D\psi(0)=0$. We also compute $$D\psi(x) = 3\sigma'(3|x|)\frac{x}{|x|} \ \ \text{for }x\neq 0.$$ By $(**)$ we see that $D\psi(x)$ is continuous at $x=0$. Similarly, by $(**)$ we see that $$|D\psi(x)| \leq 9\rho(3|x|)|x|.$$ Again using the difference quotient definitions of the derivative we have $D^2\psi(0)=0$, since $\rho$ is continuous and vanishes at the origin. We also compute $$D^2\psi(x) = 3\sigma'(3|x|)\frac{I}{|x|} + 3\left(3\sigma''(3|x|) - \frac{\sigma'(3|x|)}{|x|}\right) \frac{xx^T}{|x|^2}$$ for $x\neq 0$. As before, by $(**)$ and $(***)$ we find that $D^2\psi(x)\to 0$ as $x\to 0$, so we have continuity at zero. This verifies that $\psi$ is indeed $C^2$ and has vanishing gradient and Hessian.
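To see the vanishing difference quotients concretely, here is a small finite-difference sketch with the sample choice $\sigma(r) = r^3/6$ from above (again my illustrative choice, not Calder's), so that $\psi(x) = \sigma(3|x|) = \tfrac 92 |x|^3$:

```python
import numpy as np

# Finite-difference check that the perturbation psi(x) = sigma(3|x|) has
# vanishing gradient and Hessian at the origin, for the sample choice
# sigma(r) = r^3 / 6, i.e. psi(x) = 4.5 |x|^3.
psi = lambda x: (3 * np.linalg.norm(x)) ** 3 / 6

e1 = np.array([1.0, 0.0])
for h in [1e-1, 1e-2, 1e-3]:
    grad_quot = psi(h * e1) / h                        # quotient for D psi(0) . e1
    hess_quot = (psi(h * e1) + psi(-h * e1)) / h**2    # quotient for e1' D^2 psi(0) e1
    print(f"h={h:g}  grad quotient={grad_quot:.2e}  Hessian quotient={hess_quot:.2e}")
# both quotients tend to 0 with h, consistent with D psi(0) = 0, D^2 psi(0) = 0
```

For general $\rho$ the same decay follows from the bounds $(*)$ and $(**)$ rather than an explicit formula.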

I hope this helps clear things up. Let me know if you have other questions.

Answer:

Given $(p, X)$ in the superjet, we can construct $\phi$ by using $p$ and $X$ as data prescribing the first and second derivatives; some extra work is needed to ensure the local maximum property. First, set $$ \tilde\phi(x) = u(\hat x) + \langle p, x - \hat x\rangle + \frac{1}{2}\langle X(x - \hat x), x - \hat x\rangle. $$ From the definition of $\tilde\phi$ we see that $\tilde\phi$ is $C^2$ with $D\tilde\phi(\hat x) = p$ and $D^2 \tilde\phi(\hat x) = X$. Since $(p, X)$ is in the superjet, we have $$ u(x) \leq \tilde\phi(x) + o(|x - \hat x|^2). $$

This isn't quite what we want: we want $u - \tilde\phi$ to have a local max at $\hat x$, i.e. $u - \tilde \phi \leq 0$ near $\hat x$ (since $u(\hat x) = \tilde\phi(\hat x)$). To fix this, we use the definition of little-o notation: there exists $\delta > 0$ such that for $|x- \hat x| < \delta$, $\frac{1}{|x - \hat x|^2}o(|x - \hat x|^2) \leq 1$, hence the little-o error term is $\leq |x - \hat x|^2$ on $B(\hat x, \delta)$. Define now $$ \phi(x) = \tilde\phi(x) + 2|x - \hat x|^2. $$ Then $D\phi(\hat x) = D\tilde\phi(\hat x) = p$, $u(\hat x) = \phi(\hat x)$, and for all $x \in B(\hat x, \delta)$, we have $$ u(x) - \phi(x) = u(x) - \tilde\phi(x) - 2|x - \hat x|^2 \leq |x - \hat x|^2 - 2|x - \hat x|^2 = - |x - \hat x|^2 \leq 0. $$ Thus $u - \phi$ has a local maximum at $x = \hat x$.

Note also that $$ D^2\phi(\hat x) = D^2\tilde\phi(\hat x) + 4I $$ and thus $$ u(x) \leq u(\hat x) + \langle D\phi(\hat x), x - \hat x\rangle + \frac{1}{2}\langle D^2\phi(\hat x)(x - \hat x), x - \hat x \rangle + o(|x - \hat x|^2), $$ since $D^2\phi(\hat x) \geq D^2 \tilde \phi(\hat x)$ (the matrix $4I$ is positive definite). Thus $(D\phi(\hat x), D^2\phi(\hat x))$ is in the superjet. Note, however, that this construction yields the Hessian $X + 4I$ rather than $X$ itself; to obtain $D^2\phi(\hat x) = X$ exactly, one needs the finer construction from the other answers.
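As a concrete 1-D illustration of this construction (my example, not from the answer): take $u(x) = -|x|$ and $\hat x = 0$, for which $(p, X) = (0, 0)$ lies in the superjet since $-|x| \leq 0 + o(x^2)$. Then the test function is $\phi(x) = u(0) + p\,x + \tfrac 12 X x^2 + 2x^2$:

```python
import numpy as np

# Demo of the construction for u(x) = -|x| at x_hat = 0 (1-D, illustrative example).
# (p, X) = (0, 0) lies in the superjet, since -|x| <= 0 + o(x^2).
u = lambda x: -np.abs(x)
p, X = 0.0, 0.0
phi = lambda x: 0.0 + p * x + 0.5 * X * x**2 + 2 * x**2   # u(0) + p x + X x^2/2 + 2 x^2

x = np.linspace(-0.5, 0.5, 1001)
diff = u(x) - phi(x)
assert np.all(diff <= 1e-12)                 # u - phi <= 0 near x_hat
assert abs(x[np.argmax(diff)]) < 1e-12       # and the max (value 0) is at x_hat = 0
print("u - phi has a local max (value 0) at x_hat = 0")
```

Here $\phi''(0) = X + 4 = 4$, illustrating that this recipe inflates the Hessian by $4I$.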

Answer:

Let $u \in USC(\Omega)$ and $x_0 \in \Omega$. If $(p, X) \in J^{2, +}u(x_0)$ then there exists $\varphi$ of class $C^2$ supertangent to $u$ at $x_0$ such that $p = D \varphi(x_0)$ and $X = D^2 \varphi(x_0)$.

Proof: Take $(p, X) \in J^{2, +}u(x_0)$. Then, in a neighborhood of $x_0$, $$ u(x) \leq \underbrace{u(x_0) + p \cdot (x - x_0) + \frac 12 \left(X (x - x_0) \right) \cdot (x - x_0)}_{:= P(x)} + o(|x - x_0|^2), $$ so $$ u(x) - P(x) \leq o(|x - x_0|^2) = |x - x_0|^2 o(1). $$

Our task is then to find a nonnegative $C^2$ function that dominates $u - P$ near $x_0$ and is $o(|x - x_0|^2)$, so that its gradient and Hessian at $x_0$ vanish.

For $r \geq 0$ small we define $$ \omega(r) = \max_{|x - x_0| \leq r}(u(x) - P(x)). $$ Since $u - P$ is upper semicontinuous and we are taking maxima over compact sets, $\omega$ is well defined. It is nondecreasing, since the sets over which we take the maximum grow with $r$, and $\omega(0) = 0$ because $P(x_0) = u(x_0)$. Moreover, for $r > 0$, by our previous remark it holds that $$ \frac{\omega(r)}{r^2} = o(1), \quad r \to 0. $$ (Note that $\omega$ itself need not be smooth, or even continuous; if necessary, one first replaces it by a continuous nondecreasing majorant still satisfying this bound. The double averaging below then produces a $C^2$ function.) Next we define $$ \varphi(r) = \frac{1}{r^2} \int_r^{2r} \int_s^{2s} \omega(t) \, dt \, ds. $$ Since $\omega$ is nondecreasing, we can estimate: \begin{align*} \frac 32 \omega(r) = \frac{1}{r^2}\, \omega(r)\, \frac 32 r^2 & \leq \frac{1}{r^2} \int_r^{2r} s\, \omega(s) \, ds \\ & \leq \varphi(r) \\ & \leq \frac{1}{r^2} \int_r^{2r} s\, \omega(2s) \, ds \\ & \leq \frac{1}{r^2}\, \omega(4r)\, \frac 32 r^2 = \frac 32 \omega(4r), \end{align*} from which we conclude that $\varphi(r) = o(r^2)$ as $r \to 0$. Then $$ u(x) - P(x) \leq \omega(|x - x_0|) \leq \frac 23 \varphi(|x - x_0|) \leq \varphi(|x - x_0|). $$ We conclude that $$ P(x) + \varphi(|x - x_0|) $$ is a supertangent with the desired gradient and Hessian at $x_0$: since $\varphi(r) = o(r^2)$, the perturbative term $\varphi(|x - x_0|)$ has vanishing gradient and Hessian at $x_0$. The proof is complete.
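The averaging construction can be sketched numerically. Below is an illustrative 1-D example of my own choosing: $u(x) = |x|^{5/2}$, $P = 0$, $x_0 = 0$, so that $u - P = o(|x|^2)$ and $\omega(r) = r^{5/2}$ exactly; the code checks the bound $\omega \leq \tfrac 23 \varphi$ and that $\varphi(r)/r^2 \to 0$:

```python
import numpy as np

# Sample case: u(x) = |x|^{5/2}, P = 0, x0 = 0 (1-D), so omega(r) = r^{5/2}.
omega = lambda r: r ** 2.5

def trap(f, a, b, n=400):
    # simple trapezoid rule for int_a^b f(t) dt
    x = np.linspace(a, b, n)
    y = f(x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def phi(r, n=400):
    # phi(r) = (1/r^2) int_r^{2r} int_s^{2s} omega(t) dt ds  (nested trapezoid rule)
    s = np.linspace(r, 2 * r, n)
    inner = np.array([trap(omega, si, 2 * si) for si in s])
    outer = float(np.sum((inner[1:] + inner[:-1]) * np.diff(s)) / 2)
    return outer / r ** 2

for r in [0.1, 0.01, 0.001]:
    assert omega(r) <= (2 / 3) * phi(r)              # the bound omega <= (2/3) phi
    print(f"r={r:g}  phi(r)/r^2 = {phi(r) / r**2:.4f}")  # -> 0 as r -> 0
```

The printed ratios shrink like $\sqrt r$ in this example, consistent with $\varphi(r) = o(r^2)$.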