Derivative and inverse


Is there a function whose derivative of its inverse equals the inverse of its derivative? This may also hold on a specific interval only. $$\frac{d}{dx}f^{\langle-1\rangle}(x)=\left(\frac{d}{dx}f(x)\right)^{\langle-1\rangle}$$ Considering that $f(x)=x$ is its own inverse and $e^x$ is its own derivative, neither of them satisfies the equation, but both are the only functions with the respective properties (over the reals). Is this already a proof that there is no solution to the above equation?

Also, I don't know much about complex analysis, but if a complex-valued function meets the criteria I'd be happy to know too :)
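To make the mismatch concrete, the two sides of the equation can be compared numerically. Below is a minimal Python sketch (all helper names, `num_deriv`, `num_inverse`, `residual`, are my own): it approximates $\frac{d}{dx}f^{\langle-1\rangle}$ by a finite difference of a bisection-based inverse, and $\left(\frac{d}{dx}f\right)^{\langle-1\rangle}$ by inverting the finite-difference derivative.

```python
import math

def num_deriv(F, x, h=1e-6):
    """Central finite-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

def num_inverse(F, y, lo, hi):
    """Invert a monotone F on [lo, hi] by bisection (y must be bracketed)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if (F(mid) - y) * (F(lo) - y) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def residual(F, x, lo, hi):
    """d/dx F^<-1>(x) minus (F')^<-1>(x); zero (up to discretization
    error) exactly when the equation holds at x."""
    lhs = num_deriv(lambda t: num_inverse(F, t, lo, hi), x)
    rhs = num_inverse(lambda t: num_deriv(F, t), x, lo, hi)
    return lhs - rhs

# For f = exp: (f^<-1>)'(x) = 1/x while (f')^<-1>(x) = ln x, so at x = 2
# the residual is 1/2 - ln 2 ≈ -0.193.  (f(x) = x is not even admissible:
# f'(x) = 1 is constant, hence has no inverse.)
print(residual(math.exp, 2.0, -10.0, 10.0))
```

The same scaffold can be pointed at any monotone candidate on a bracketing interval to see how far it is from satisfying the equation.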


There are 3 best solutions below


Edits: I leave here the preliminary lines of thought / failed attempts. The answer based on monotonicity of the functions is probably correct, and one may try to give a more abstract justification, cf. part 2.

  • To begin with, it is clear that the OP uses $f^{\langle-1\rangle}$ for the compositional inverse of $f$, not the reciprocal $\frac{1}{f}$. Hence if one wants to formulate the problem in algebraic terms, we need to introduce an algebra $\mathcal{A}$ of functions with two products: $\times$, the pointwise product, and $\circ$, the composition. The notion of derivation is readily generalized as a linear map $D:\mathcal{A} \to \mathcal{A}$ which satisfies the Leibniz rule w.r.t. $\times$. One can define the group $G$ of $\circ$-invertible elements and study its image under $D$, stable subsets, etc.
  • The immediate relations we get for $\mathcal{A}:= \mathcal{C}^{\infty}(\mathbb{R})$ are (to avoid confusion, $g$ is the inverse of $f$ for $\circ$, while $h$ is the inverse of $f'$ for that same law; $\mathrm{Id}$ is the unit for $\circ$, while the constant function $1$ is the unit for $\times$) $$ f\circ g=\mathrm{Id}_{\mathbb{R}}\quad \Longrightarrow\quad g' \times f'\circ g = 1\quad \label{1}\tag{1}$$ $$ f'\circ h=\mathrm{Id}_{\mathbb{R}}="x"\quad \Longrightarrow\quad \frac{1}{x} \times f'\circ h = 1 \quad \label{2}\tag{2}$$ With these notations, the OP's equation reads $g'\overset{!}{=} h$, and by (\ref{1}): $\quad g' = \frac{1}{f'\circ g}=h$. (I assume the $\circ$-inverses are two-sided, as in the usual definition.) What Ninad Munshi wrote in his first comment is the composition on the right of this equality with $f'$: $\quad \frac{1}{f'\circ g \circ f'} = h\circ f' =\mathrm{Id}_{\mathbb{R}}$. I did play with similar equalities, but to no avail unfortunately... (@Christian, by the way, your "Feb 18" comment is correct, but for clarity you should for example use the variable $x$ for the domain and $y$ for the target space...)
  • Let us examine more abstractly the behavior of $\circ$ and of $f\mapsto f^{\langle -1\rangle}$ w.r.t. $+$ and $\times$. For the derivative of a composition there is the Faà di Bruno formula, but I was thinking more about characterizing the set $\mathcal{S}$ of solutions of the equation: does it define a subgroup / subvector space / ideal of $\mathcal{A}$? Unfortunately no, although (if not empty) $\mathcal{S}$ is stable under taking the inverse, which we may denote $\mathrm{Inv}$: indeed, if $f\in \mathcal{S}$ then so is $ g:=\mathrm{Inv}\, f$ $$\frac{d}{dx}\ g = \mathrm{Inv} \circ \frac{d}{dx}\circ\mathrm{Inv}\ g \quad\Longrightarrow\quad \mathrm{Inv} \circ \frac{d}{dx}\ g = \mathrm{Inv} \circ \mathrm{Inv} \circ \frac{d}{dx}\circ\mathrm{Inv}\ g = \frac{d}{dx}\circ\mathrm{Inv}\ g $$ The equation can in fact be understood as the commutation of the derivation operator $\frac{d}{dx}$ with the operator of taking the inverse $$\frac{d}{dx}\circ \mathrm{Inv}\ f = \mathrm{Inv} \circ \frac{d}{dx}\ f \label{Eq}\tag{Eq}$$ The contradiction I had hoped to get, if $(\mathcal{S},\circ)$ were a group, was to exhibit a stable subvector space and to show that a group for $\circ$ could not contain a vector space; but $\mathcal{S}$ is not stable under composition! To do so, I assumed $\mathcal{A}:=\mathcal{C}^{\infty}_c(\mathbb{R}) \hookrightarrow H^1(\mathbb{R})$ (a Sobolev space, or any space on which one could "diagonalize" $\frac{d}{dx}$, usually by Fourier transform). As a consequence of (\ref{Eq}), $\mathrm{Inv}$ should preserve the eigenspaces of $\frac{d}{dx}$. If we really want to continue along this line, I think we have to define a new kind of algebraic structure...
  • Inverse functions also appear when one does a change of variables in integration, so one can also get an integral version of the equation $g'=h$ in the case $\mathcal{A}:= \mathcal{C}^{\infty}(\mathbb{R})$ $$\begin{split} g(y)-g(0) &=\int_0^y g'(w)\, dw = \int_0^y h(w)\, dw =\genfrac{[}{]}{0pt}{0}{z:=h(w)}{dz = h'(w)\, dw = \frac{1}{f''\circ h(w)}\, dw}\\ &= \int_{h(0)}^{h(y)} f''(z)\, z\, dz = \left[z\, f'(z) \right]^{h(y)}_{h(0)} - \int_{h(0)}^{h(y)} f'(z)\, dz\\ &= h(y)\times y -\big(f\circ h(y) - f\circ h(0) \big) \end{split}$$ (using $f'\circ h(y)=y$ and $f'\circ h(0)=0$).
  • Could there be a power series solution? From the comments, the attempt with $f(x):= a x^{b} $ involved functions of the form $\sqrt[b]{x}$ and $\sqrt[b-1]{x}$, so maybe instead of looking at power series, one may look at sums of functions of the form $\sqrt[n]{x}$.
  • Can't we adapt some fixed-point techniques? (Very explicitly: start with some function $f_0$, with $\Delta(f_0):=\frac{d}{dx}f_0^{\langle-1\rangle} - \left(\frac{d}{dx}f_0\right)^{\langle-1\rangle}$ not necessarily $0$, and look for some transformation $T$ s.t. $\Delta \big(T(f_0)\big)$ is closer to $0$, and so on.) In particular for the case $\mathcal{A}:=\mathcal{C}^{\infty}(\mathbb{R})$, where inversion is the symmetry w.r.t. the diagonal in the plane $\mathbb{R}\times \mathbb{R}$ where we represent the graph of the function.
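Relation (1) above is easy to check numerically. Here is a small sketch (helper names are my own) for $f=\exp$, where $g=\ln$ and, since $f'=f$, also $h=\ln$: the identity $g' = \frac{1}{f'\circ g}$ holds, yet $g'\neq h$.

```python
import math

# For f(x) = e^x: g = f^<-1> = ln, f' = exp, and h = (f')^<-1> = ln as well.
f_prime = math.exp
g = math.log
h = math.log

def g_prime(x, eps=1e-6):
    """Central finite difference for g'(x)."""
    return (g(x + eps) - g(x - eps)) / (2 * eps)

for x in (0.5, 1.0, 4.0):
    # relation (1):  g' × (f' ∘ g) = 1, i.e. g'(x) = 1/(f' ∘ g)(x) = 1/x
    assert abs(g_prime(x) * f_prime(g(x)) - 1) < 1e-8
    # ...but h(x) = ln x, so g' ≠ h and the equation fails for exp
    print(x, g_prime(x), h(x))
```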


Trick to give an algebraic formulation of the argument based on monotonicity: define the action of translations on functions. If we want to break it into small steps, we can define translation by a constant $h\in \mathbb{R}$ acting on $\mathcal{A}:= \mathcal{C}^{\infty}(\mathbb{R})$ $$\tau_h: \left\lbrace \begin{aligned} \mathcal{A}\quad & \longrightarrow \quad \mathcal{A}\\ "f(t)" & \longmapsto "f(t-h) "\end{aligned} \right. \in GL(\mathcal{A}) $$ More generally, the action of the translation group is defined by the group morphism $$\left\lbrace \begin{aligned} \big(\mathbb{R},+\big) & \longrightarrow \big(GL(\mathcal{A}),\circ \big)\\ h\quad & \longmapsto\quad \tau_h \end{aligned} \right. \label{A}\tag{A}$$ To imitate the other reasoning, based on the formula for the derivative of a composition (\ref{1}-\ref{2}), we need to show that the left- and right-hand sides of (\ref{Eq}) do not behave in the same way under (\ref{A}).


Disclaimer. In view of the comments and of growing insight, this answer must be considered a partial one.

The easiest way is to start with considering the derivatives. Let $g(x)=f'(x)$. Then $g(x)$ must have an inverse $g^{-1}(x)$. This means that $y=g(x)$ is monotonically increasing or decreasing. $$ \mbox{Increasing:} \quad \begin{cases} x_1 \lt x_2 \quad \Longleftrightarrow \quad y_1 \lt y_2 \\ x_1 \gt x_2 \quad \Longleftrightarrow \quad y_1 \gt y_2 \end{cases} \\ \mbox{Decreasing:} \quad \begin{cases} x_1 \lt x_2 \quad \Longleftrightarrow \quad y_1 \gt y_2 \\ x_1 \gt x_2 \quad \Longleftrightarrow \quad y_1 \lt y_2 \end{cases} $$ The inverse $g^{-1}(x)$ is obtained (graphically) by mirroring $g(x)$ in the line $y=x$, thus by exchanging $x$ and $y$. From this it is clear that $g(x)$ and $g^{-1}(x)$ must both be monotonically increasing or both be monotonically decreasing.
The same considerations are valid for $f(x)$ and $f^{-1}(x)$ as well, because these two are each other's inverses too.
But $g(x)$ is also a derivative. If this derivative were to become zero somewhere, then $f(x)$ would no longer be monotonic at that place. So $g(x)$ must be nonzero everywhere in its domain. For the inverse $g^{-1}(x)$ this means that there cannot be an intersection with the $y$-axis. Without loss of generality, we may assume that both $g(x)$ and $g^{-1}(x)$ are defined in the first quadrant, for $x \gt 0$ and $y \gt 0$. Other cases are covered by re-defining $g$ using symmetry: $g(x) := g(-x)$, $g(x) := -g(x)$, $g(x) := -g(-x)$.
Before proceeding, a few examples will be presented. First example: $$ f'(x) = g(x) = 1/x \quad ; \quad [f'(x)]^{-1} = g^{-1}(x) = 1/x \\ f(x) = \int_1^x\frac{dt}{t} = \ln(x) = y \quad ; \quad x = \ln(y) \; \Rightarrow \; f^{-1}(x) = e^x \quad ; \quad [f^{-1}]'(x) = e^x $$ It is noticed that $[f'(x)]^{-1}$ is monotonically decreasing while $[f^{-1}]'(x)$ is monotonically increasing. Second example: $$ f'(x) = g(x) = \sqrt{x} \quad ; \quad [f'(x)]^{-1} = x^2 \\ f(x) = \int_0^x\sqrt{t}\,dt = \frac{2}{3}x^{3/2} \quad ; \quad x = \frac{2}{3}y^{3/2} \; \Rightarrow \; f^{-1}(x) = \left(\frac{3x}{2}\right)^{2/3} \quad ; \quad [f^{-1}]'(x) = \left(\frac{3x}{2}\right)^{-1/3} $$ It is noticed that $[f'(x)]^{-1}$ is monotonically increasing while $[f^{-1}]'(x)$ is monotonically decreasing.
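The monotonicity pattern of these two examples can be confirmed with a few lines of Python (a rough sketch; `direction` is my own helper and assumes its input is monotone on the sampled grid):

```python
import math

def direction(F, xs):
    """Classify a function monotone on the grid xs as increasing/decreasing."""
    ys = [F(x) for x in xs]
    return "increasing" if all(a < b for a, b in zip(ys, ys[1:])) else "decreasing"

xs = [0.5 + 0.5 * k for k in range(6)]

# Example 1: f = ln, so [f'(x)]^{-1} = 1/x and [f^{-1}]'(x) = e^x
print(direction(lambda x: 1 / x, xs), direction(math.exp, xs))
# -> decreasing increasing

# Example 2: f = (2/3) x^{3/2}, so [f'(x)]^{-1} = x^2 and [f^{-1}]'(x) = (3x/2)^{-1/3}
print(direction(lambda x: x * x, xs), direction(lambda x: (1.5 * x) ** (-1 / 3), xs))
# -> increasing decreasing
```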
A pattern is emerging. Let's consider now the general case for $g(x)=f'(x)$ monotonically decreasing. It will be assumed that the reader is able to construct the analogue for monotonically increasing $g(x)$. Then $[f'(x)]^{-1}$ is monotonically decreasing as well. Define the function $f(x)$ as a definite integral in statu nascendi (a Riemann sum). For example as: $$ \sum_i g\left(x_i\right)\left(x_{i+1}-x_i\right) \approx \int_a^x g(t)\,dt = f(x) $$ And select the subdivisions in such a way that $$ g\left(x_i\right)\left(x_{i+1}-x_i\right) = \mbox{constant} = dA $$ Then we can construct $f(x)$ and its inverse by numerical approximation. It is clear that if $g(x_i)$ is decreasing, then the area $dA$ can only be kept constant by increasing the intervals $\left(x_{i+1}-x_i\right)$. This means that the intervals of $f^{-1}(x_i)$ must be increasing, because of $\left(x_{i+1}-x_i\right)\to\left(y_{i+1}-y_i\right)$. But then the derivative of $f^{-1}(x_i)$ is equal to $\left(y_{i+1}-y_i\right)/dA$, which means that $[f^{-1}]'(x)$ is increasing numerically, and therefore analytically when taking limits. So here comes our end result, apart from technicalities to be filled in by the questioner eventually.
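The equal-area construction above is easy to carry out numerically. A minimal sketch for the decreasing case $g(x)=f'(x)=1/x$ (variable names are mine): each step is chosen so that $g(x_i)(x_{i+1}-x_i)=dA$, and the difference quotients $(x_{i+1}-x_i)/dA$ then approximate $[f^{-1}]'$ at $f(x_i)\approx i\,dA$.

```python
# Equal-area Riemann construction for g(x) = 1/x, starting at x_0 = 1.
dA = 0.1
x = [1.0]
for _ in range(50):
    x.append(x[-1] + dA / (1 / x[-1]))      # g(x_i) * (x_{i+1} - x_i) = dA
widths = [b - a for a, b in zip(x, x[1:])]
slopes = [w / dA for w in widths]            # ≈ [f^{-1}]'(i * dA)
# g decreasing forces the widths, and hence the slopes, to increase:
assert all(a < b for a, b in zip(slopes, slopes[1:]))
print(slopes[0], slopes[-1])
```

Here $f(x)=\int_1^x dt/t=\ln x$, so the slopes roughly track $[f^{-1}]'(y)=e^y$, which is indeed increasing.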

Theorem.
$[f'(x)]^{-1}$ is monotonically decreasing if and only if $[f^{-1}]'(x)$ is monotonically increasing.
$[f'(x)]^{-1}$ is monotonically increasing if and only if $[f^{-1}]'(x)$ is monotonically decreasing.
Therefore it is impossible to find (real valued) functions $f$ with $[f'(x)]^{-1}=[f^{-1}]'(x)$.

EDIT.

Below is a visualization of the technique employed, for the special example $f'(x)=[f'(x)]^{-1}=1/x$ , $\color{red}{f(x)=\ln(x)}$ , $\color{green}{f^{-1}(x)=[f^{-1}]'(x)=e^x}$ .
Further specifications: $-0.1 \le x \le 9$ ; $-0.1 \le y \le 9$ ; $dA = 1/10$ ;
thick colored lines numerically, thin colored lines analytically.

[figure: numerical vs. analytic curves for the example above]

BUG FIX.

The domains of interest are much trickier than I thought.

Counterexample. Consider $f(x)=1/x$, defined for $x\gt 0,y\gt 0$. Then $f'(x)=-1/x^2$ is defined for $x\gt 0,y\lt 0$; $[f'(x)]^{-1}=\sqrt{-1/x}$ is defined for $x\lt 0,y\gt 0$ and increasing; $f^{-1}(x)=1/x$ is defined for $x\gt 0,y\gt 0$; $[f^{-1}]'(x)=-1/x^2$ is defined for $x\gt 0,y\lt 0$ and increasing.
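A quick numerical confirmation of this counterexample (a sketch; the names are mine): both functions are increasing, but on disjoint domains.

```python
import math

fp = lambda x: -1 / x**2               # f'(x) for f(x) = 1/x on x > 0
fp_inv = lambda x: math.sqrt(-1 / x)   # [f'(x)]^{-1}, defined only for x < 0
finv_p = lambda x: -1 / x**2           # [f^{-1}]'(x), defined only for x > 0

# fp_inv really inverts f':  f'([f']^{-1}(x)) = x  for x < 0
for x in (-4.0, -1.0, -0.25):
    assert abs(fp(fp_inv(x)) - x) < 1e-12

# both functions are increasing on their respective domains...
assert fp_inv(-4.0) < fp_inv(-1.0) < fp_inv(-0.25)
assert finv_p(1.0) < finv_p(2.0) < finv_p(3.0)
# ...but those domains (x < 0 vs. x > 0) are disjoint, so the two
# functions cannot coincide.
```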
So we have a problem with the first two statements of our Theorem.

In general. Consider $f(x)$, defined for $x\gt 0,y\gt 0$ and monotonically decreasing. Then $f'(x)$ is defined for $x\gt 0,y\lt 0$ (i.e. negative) and monotonically increasing. $[f'(x)]^{-1}$, because of the mirroring in $y=x$, is defined for $x\lt 0,y\gt 0$ and monotonically increasing as well. $f^{-1}(x)$, because of the mirroring in $y=x$, is defined for $x\gt 0,y\gt 0$ and monotonically decreasing. So $[f^{-1}]'(x)$ is defined for $x\gt 0,y\lt 0$ and monotonically increasing.
However, because of the two different domains $[f^{-1}]'(x)$ cannot be identical to $[f'(x)]^{-1}$.
Conclusion: the last statement of our Theorem remains valid, but the first two statements, in general, are wrong.

Note. A better numerical approximation of the integral can sometimes be obtained with the trapezium rule: $$ \frac{g\left(x_i\right)+g\left(x_{i+1}\right)}{2}\left(x_{i+1}-x_i\right) = \mbox{constant} = dA = \frac{1}{n} $$ But it works only if $g\left(x_{i+1}\right)$ can be predicted, given the value of $dA$. In case $g(x)$ is a rectangular (orthogonal) hyperbola, prediction is successful: $$ x_{i+1}-x_i=\frac{1}{n}\frac{2}{1/x_{i+1}+1/x_i}=\frac{1}{n}\frac{2x_{i+1}x_i}{x_{i+1}+x_i} \quad \Longleftrightarrow \\ n\left(x_{i+1}-x_i\right)\left(x_{i+1}+x_i\right)-2x_{i+1}x_i=nx_{i+1}^2-2x_ix_{i+1}-nx_i^2=0 \quad \Longleftrightarrow \\ x_{i+1} = \frac{x_i}{n}\pm\sqrt{\left(\frac{x_i}{n}\right)^2+x_i^2}\quad \Longrightarrow\quad x_{i+1}=x_i\left(\frac{1}{n}+\sqrt{1+\frac{1}{n^2}}\right) $$
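The closed-form step can be verified directly: with $x_{i+1}=x_i\left(\frac{1}{n}+\sqrt{1+\frac{1}{n^2}}\right)$, every trapezium under $g(x)=1/x$ has area exactly $\frac{1}{n}$. A short check (variable names are mine):

```python
import math

n = 10
r = 1 / n + math.sqrt(1 + 1 / n**2)   # ratio x_{i+1}/x_i from the quadratic
x = 1.0
for _ in range(20):
    x_next = x * r
    # trapezium rule for g(x) = 1/x over [x, x_next]
    area = (1 / x + 1 / x_next) / 2 * (x_next - x)
    assert abs(area - 1 / n) < 1e-12
    x = x_next
```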


I found this video about complex solutions to the problem of the form $Cx^N$:

$$\frac{d}{dx} f^{\langle -1 \rangle}(x) = \left(\frac{d}{dx} f(x) \right)^{\langle -1 \rangle} \quad \to \; \frac{ x^\frac{1-N}{N}}{NC^\frac{1}{N}} = \left(\frac{x}{CN} \right)^\frac{1}{N-1}$$

This only holds when $\frac{1-N}{N} = \frac{1}{N-1}$, i.e. $N^2-N+1=0$, so $N = \frac{1 \, \pm \, i \sqrt{3}}{2} = e^{\pm \frac{i \pi}{3}}$

and $\quad NC^\frac{1}{N} = (CN)^\frac{1}{N-1} \qquad C = N^{(N-2)N}= \pm \, ie^{\pm \frac{\pi}{2\sqrt{3}}} \;$ or $\; e^{\pm \frac{\pi}{\sqrt{3}}}$
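These values can be sanity-checked numerically with principal-branch complex powers (a sketch, assuming the branch $N=e^{i\pi/3}$; Python's built-in complex power uses the principal logarithm):

```python
import math

# Branch N = e^{i*pi/3} of N^2 - N + 1 = 0, and C = N^{(N-2)N}
N = complex(math.cos(math.pi / 3), math.sin(math.pi / 3))
assert abs(N**2 - N + 1) < 1e-12

C = N ** ((N - 2) * N)
# for this branch, C = -i * e^{pi/(2*sqrt(3))}
assert abs(C - (-1j) * math.exp(math.pi / (2 * math.sqrt(3)))) < 1e-9

# d/dx f^<-1>(x)  vs  (f')^<-1>(x)  for f(x) = C x^N, at a few real x > 0
for x in (0.5, 1.0, 2.0):
    lhs = x ** ((1 - N) / N) / (N * C ** (1 / N))
    rhs = (x / (C * N)) ** (1 / (N - 1))
    assert abs(lhs - rhs) < 1e-9
```

With principal branches the two sides agree on the positive real axis; other branch choices give the other listed values of $C$.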