Help finding a $C^{1}$ function, given $C^{1}$ functions, a relation, and some additional assumptions.

I'm down to the following problem that I just need some insight on (I couldn't find anything close to it online, in other posts here, etc.). My initial thought was to use the Inverse Function Theorem and/or the Implicit Function Theorem, eventually, to prove the statement below. The question is expressed exactly as it was given, and I'll additionally state the versions of both the Inverse and Implicit Function Theorems I was given. Note that I'm finishing the second course of an undergraduate Advanced Calculus sequence, and both theorems below are taken from the class text we've been working out of all year: C. Pugh's Real Mathematical Analysis (pp. 286-289 in the 2nd ed., softcover). This is a great book, in my opinion -- others are better, but it's an excellent read with tons of interesting problems, some of which even the author doesn't have an answer to. Lastly, and for the record, the problem stated below is not in Pugh's book.

Problem: Let $f:\mathbb{R}\rightarrow\mathbb{R}$, and $g:\mathbb{R}\rightarrow\mathbb{R}$ be $C^{1}$ functions on $\mathbb{R}$ with $f(0)=0$ and $f'(0)\neq 0$. Prove that there exists a $\delta>0$ and a $C^{1}$ function $\varphi:(-\delta,\delta)\rightarrow\mathbb{R}$ such that $\varphi(0)=0$ and $\cos(x)\cdot f\big(\varphi(x)\big) = \sin(x)\cdot g\big(\varphi(x)\big)$, for all $x\in(-\delta,\delta)$.

As a quick remark, the "exact" problem statement has $\cos(x)\cdot f\big(\varphi(x)\big)=\sin(x)\cdot g\big(\varphi(x)\big)$ for all $x\in(\delta,\delta)$ instead of $(-\delta,\delta)$, which I took to be a typo. Here are the Implicit and Inverse Function Theorems.

Implicit Function Theorem: Let $f:U\rightarrow\mathbb{R}^{m}$ be given, where $U$ is an open subset of $\mathbb{R}^{n}\times\mathbb{R}^{m}$. Fix $(x_{0},y_{0})\in U$ and write $f(x_{0},y_{0})=z_{0}$. If $f$ is $C^{r}$, $1\leq r\leq+\infty$, then near $(x_{0},y_{0})$ the $\boldsymbol{z_{0}}\textbf{-locus}$ of $f$ (i.e., the set of points $(x,y)$ near $(x_{0},y_{0})$ at which $f(x,y)=z_{0}$) is the graph of a unique function $y=g(x)$. Besides, $g$ is $C^{r}$.

The book previously states that an assumption used in the proof is that the $m\times m$ matrix $B=\bigg[\dfrac{\partial f_{i}(x_{0},y_{0})}{\partial y_{j}}\bigg]$ is invertible (equivalently, the linear map corresponding to $B$ is an isomorphism of $\mathbb{R}^{m}$ onto itself).
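As a quick illustration of the theorem in the simplest case $n=m=1$ (a standard example, not taken from Pugh's statement): let $f(x,y)=x^{2}+y^{2}-1$ and fix $(x_{0},y_{0})=(0,1)$, so $z_{0}=0$. Here $B=\big[\partial f(0,1)/\partial y\big]=[2]$ is invertible, and indeed near $(0,1)$ the $0$-locus of $f$ is the graph of the unique function $y=g(x)=\sqrt{1-x^{2}}$, which is $C^{\infty}$, as the theorem predicts.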

Inverse Function Theorem: If $m=n$ and $f:U\rightarrow\mathbb{R}^{m}$ is $C^{r}$, $1\leq r\leq+\infty$, and if at some $p\in U$, $(Df)_{p}$ is an isomorphism, then $f$ is a $C^{r}$ diffeomorphism from a neighborhood of $p$ to a neighborhood of $f(p)$.

Note that in the Inverse Function Theorem above, $U\subset\mathbb{R}^{n}$ is open. As for some preliminary work I performed (very little, since I moved on with the intention of coming back to this problem): should such a $\delta>0$ and $\varphi:(-\delta,\delta)\rightarrow\mathbb{R}$ exist with $\delta>k\pi$ for some positive integer $k$, then $\sin(k\pi)=0$ forces $f\big(\varphi(k\pi)\big)=0$, so $\varphi(k\pi)=0=\varphi(0)$ is possible since $f(0)=0$. Similarly, if $\delta>\frac{(2k+1)\pi}{2}$, then $\cos$ vanishes at $\frac{(2k+1)\pi}{2}$ and we get $g\big(\varphi\big(\frac{(2k+1)\pi}{2}\big)\big)=0$. Then differentiating, coupled with the chain rule, gives more results, and I thought to use the Inverse/Implicit Function Theorems, but I'm not getting anywhere -- maybe differentiate $f$ and divide by $f'(0)\neq 0$ somewhere (to set up for using the Inverse Function Theorem)? Does any of this help? Any suggestions or recommendations are EXTREMELY helpful at this point! Sorry for the lengthy post -- I stated the multi-dimensional theorems above mainly for my own reference and practice, so I appreciate your time reading everything. I can be a bit long-winded at times, and my apologies for that; my version of "helping myself" walks a very fine line between helpfulness and overkill.
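To record what the chain rule actually gives (assuming for the moment that such a $C^{1}$ function $\varphi$ exists), differentiating $\cos(x)f\big(\varphi(x)\big)=\sin(x)g\big(\varphi(x)\big)$ yields

$-\sin(x)f\big(\varphi(x)\big)+\cos(x)f'\big(\varphi(x)\big)\varphi'(x)=\cos(x)g\big(\varphi(x)\big)+\sin(x)g'\big(\varphi(x)\big)\varphi'(x)$,

and evaluating at $x=0$, where $\varphi(0)=0$ and $f(0)=0$, leaves $f'(0)\varphi'(0)=g(0)$. So any solution must satisfy $\varphi'(0)=g(0)/f'(0)$ -- this is exactly where the hypothesis $f'(0)\neq 0$ enters.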

Best Answer

Well, I must say that I think I'm close to being correct now. I eventually realized a way to do this while working another, more complicated problem involving the Implicit Function Theorem. Connecting the ideas from the overall method I used there, I think this works (it might need to be cleaned up a little -- I'm turning the work in tomorrow and have to start revising the rest of it, so I may not get to further corrections).

Suppose that $f:\mathbb{R}\rightarrow\mathbb{R}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$ are of class $C^{1}$ with $f(0)=0$ and $f'(0)\neq 0$. We aim to define $\varphi$ implicitly, so fix the point $(x_{0},y_{0})=(0,0)\in\mathbb{R}\times\mathbb{R}=\mathbb{R}^{2}$ and define the function $F:\mathbb{R}^{2}\rightarrow\mathbb{R}$ by $F(x,y)=\cos(x)f(y)-\sin(x)g(y)$. (Note that $F$ must map into $\mathbb{R}$, not $\mathbb{R}^{2}$: we want the single equation $\cos(x)f(y)=\sin(x)g(y)$, not the two equations $\cos(x)f(y)=0$ and $\sin(x)g(y)=0$.) We seek a solution $y=\varphi(x)$ near $(x_{0},y_{0})=(0,0)$ of the equation $F(x,y)=0=z_{0}$, since $F\big(x,\varphi(x)\big)=0$ is exactly

$\cos(x)f\big(\varphi(x)\big)=\sin(x)g\big(\varphi(x)\big)$.

Note that $F$ is clearly $C^{1}$, being built from products of the $C^{1}$ one-variable functions $\cos$, $\sin$, $f$, and $g$. Also, $F(x_{0},y_{0})=F(0,0)=\cos(0)f(0)-\sin(0)g(0)=f(0)=0=z_{0}$. Here $n=m=1$, so the matrix $B=\big[\partial F_{i}(x_{0},y_{0})/\partial y_{j}\big]$ from the Implicit Function Theorem is just the $1\times 1$ matrix $\big[\partial F(0,0)/\partial y\big]$, which is invertible if and only if $\partial F(0,0)/\partial y\neq 0$. Thus we compute

$\dfrac{\partial F(x,y)}{\partial y}=\cos(x)f'(y)-\sin(x)g'(y)$.

In particular, $\dfrac{\partial F(0,0)}{\partial y}=\cos(0)f'(0)-\sin(0)g'(0)=f'(0)\neq 0$ by hypothesis, so $B$ is invertible. Therefore, according to the Implicit Function Theorem, near $(0,0)$ the $0$-locus of $F$ is the graph of a unique function $y=\varphi(x)$, and $\varphi$ is $C^{1}$: there exist $\delta,\varepsilon>0$ and a (unique) $C^{1}$ function $\varphi:(-\delta,\delta)\rightarrow(-\varepsilon,\varepsilon)$ with $\varphi(0)=0$ such that $F\big(x,\varphi(x)\big)=0$ for all $x\in(-\delta,\delta)$ -- i.e., $\cos(x)f\big(\varphi(x)\big)=\sin(x)g\big(\varphi(x)\big)$ for all $x\in(-\delta,\delta)$, as desired.
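As a sanity check, here is a concrete instance (my own choice of $f$ and $g$, not part of the original problem): take $f(y)=y$ and $g(y)\equiv 1$, so that $f(0)=0$ and $f'(0)=1\neq 0$. The relation $\cos(x)\varphi(x)=\sin(x)$ is solved by $\varphi(x)=\tan(x)$, which is indeed $C^{1}$ on $(-\delta,\delta)$ for any $0<\delta<\pi/2$, with $\varphi(0)=0$ and $\varphi'(0)=1=g(0)/f'(0)$, matching the chain-rule computation in the question.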