Why does rational dependence of $f'$ on $f$ imply that $f$ is real-analytic?


Let $f:(0,1) \to \mathbb{R}$ be a smooth function, satisfying the ODE $$ f'=\frac{P(f)}{Q(f)}, $$ for some polynomials $P,Q$. We assume that $Q(f)(x)=Q(f(x))$ does not vanish on $(0,1)$.

How to prove that $f$ is real-analytic?


I know that an equivalent condition is that for every compact set $K \subset (0,1)$ there exists a constant $C$ such that for every non-negative integer $n$ the following estimate holds on $K$: $$ \left| f^{(n)} \right| \leq C^{n+1} n!$$
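For a quick sanity check of this criterion (my own illustration; the choices $P(t)=-t^2$, $Q(t)=1$, $C=1$ are mine, not part of the question): $f(x)=1/(1+x)$ solves $f'=-f^2$ on $(0,1)$, its derivatives are $f^{(n)}(x)=(-1)^n n!/(1+x)^{n+1}$, and the estimate holds on $K=[0.1,0.9]$ with $C=1$:

```python
import math

# f(x) = 1/(1+x) solves f' = -f^2 (P(t) = -t^2, Q(t) = 1) on (0, 1),
# and by induction f^{(n)}(x) = (-1)^n n! / (1+x)^{n+1}.
def nth_derivative(n, x):
    return (-1)**n * math.factorial(n) / (1 + x)**(n + 1)

# On K = [0.1, 0.9] the bound |f^{(n)}| <= C^{n+1} n! holds with C = 1,
# since |f^{(n)}(x)| <= n! / 1.1^{n+1} < n! for x >= 0.1.
C = 1.0
for n in range(15):
    sup_on_K = abs(nth_derivative(n, 0.1))   # |f^{(n)}| is largest at the left endpoint
    assert sup_on_K <= C**(n + 1) * math.factorial(n)
print("bound holds with C = 1 for n = 0..14")
```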

I tried to prove this estimate by induction, but got stuck.

We have $$ f''=\frac{P'(f)f'Q(f)-Q'(f)f'P(f)}{Q^2(f)}, $$ so if $|f| \le C$ and $|f'| \le C^2$, then $$ |f''| \le C^2\left(\frac{|P'(f)|}{|Q(f)|}+\frac{|Q'(f)||P(f)|}{|Q(f)|^2}\right)\le C^2\left(\frac{|P'(f)|}{|Q(f)|}+C^2\frac{|Q'(f)|}{|Q(f)|}\right), $$ where the last inequality uses the assumption $|f'|=\left|\frac{P(f)}{Q(f)}\right| \le C^2$.

So, to prove that $|f''| \le 2C^3$, it suffices to prove that $$ \frac{|P'(f)|}{|Q(f)|}+C^2\frac{|Q'(f)|}{|Q(f)|} \le 2C. $$ I am not sure exactly how to choose a suitable constant $C$.
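The quotient-rule formula for $f''$ can at least be sanity-checked numerically on an explicit solution. Here is a small sketch (my own example, not from the question: $P(t)=1$, $Q(t)=2t$, for which $f(x)=\sqrt{x}$ solves $f'=P(f)/Q(f)$ on $(0,1)$):

```python
import math

# Check f'' = (P'(f) Q(f) - Q'(f) P(f)) f' / Q(f)^2 on the explicit
# example f(x) = sqrt(x), which solves f' = P(f)/Q(f) with P(t) = 1,
# Q(t) = 2t (this choice of P, Q is illustrative, not from the question).
def P(t):  return 1.0
def Q(t):  return 2.0 * t
def Pp(t): return 0.0   # P'
def Qp(t): return 2.0   # Q'

for x in (0.2, 0.5, 0.8):
    f = math.sqrt(x)
    fp = P(f) / Q(f)                      # f' = P(f)/Q(f)
    # the quotient-rule formula for f'':
    fpp_formula = (Pp(f) * Q(f) - Qp(f) * P(f)) * fp / Q(f)**2
    fpp_exact = -0.25 * x**-1.5           # exact second derivative of sqrt(x)
    assert abs(fpp_formula - fpp_exact) < 1e-12
print("formula matches the exact second derivative")
```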


There are 2 answers below.

BEST ANSWER

As already shown in the nice answer of Paul Garrett, this is true if $P$ has no zeros, by a clever application of the (analytic) inverse function theorem. It is still true in general; however, we have to work a bit more (though I believe that we should be able to extend Paul Garrett's argument to the general case as well, even if I don't see precisely how to do it).

We can take a step back and directly prove that any solution of $$ \begin{cases}\varphi'(x)= F(\varphi(x)) \\ \varphi(x_0)=\varphi_0 \end{cases} $$ with $F$ real-analytic is real-analytic as well. For this we resort to the usual Picard-Lindelöf theorem and twist it to get the real-analytic version.

Pick a radius $R>0$ such that $F$ admits an analytic extension to $B_{2R}(\varphi_0)\subseteq \mathbb{C}$ and call this extension $G$. Recall that solving the ODE above is equivalent to solving the following integral equation $$ \varphi(x) = \varphi_0 + \int_{x_0}^x F(\varphi(t))dt. $$ Now we want to use the Banach fixed point theorem to get a (unique) solution. We define our metric space to be $$ X:=\{ h: B_M(x_0) \rightarrow \mathbb{C} \ : \ h \text{ analytic}, \ h(x_0)=\varphi_0, \ \sup_{z\in B_M(x_0)} \vert h(z) - \varphi_0\vert \leq R \}, $$ where we are going to choose $M$ later on. Why do we pick exactly this space? Well, we build in the features we want from our solution: analyticity, the initial condition $\varphi(x_0)=\varphi_0$, and the last condition ensures that our integral equation makes sense ($G(h(z))$ is defined for $z\in B_M(x_0)$).

Together with the metric induced by the supremum norm this forms a complete metric space (analyticity is preserved under convergence in the supremum norm, as shown for example in the standard result on uniform limits of holomorphic functions). As in the usual proof of the Picard-Lindelöf theorem, we define the integral operator $$ T: X \rightarrow X, \quad T(\varphi)(z) = \varphi_0 + \int_{x_0}^z G(\varphi(t))dt. $$ We need to choose $M$ sufficiently small to ensure that $T$ maps into $X$ again (see the next computation). We compute for $z\in B_M(x_0)$ $$ \vert T(\varphi)(z) - T(\psi)(z) \vert = \left\vert \int_{x_0}^z \big(G(\varphi(t)) - G(\psi(t))\big) dt \right\vert \leq \left(\sup_{\zeta\in B_R(\varphi_0)} \vert G'(\zeta) \vert \right) \cdot \left\vert\int_{x_0}^z \vert \varphi(t) - \psi(t)\vert \, dt \right\vert \leq \left(\sup_{\zeta\in B_R(\varphi_0)} \vert G'(\zeta) \vert \right) M \sup_{y\in B_M(x_0)} \vert \varphi(y) - \psi(y) \vert. $$ Thus, for $M$ sufficiently small, $T$ is a contraction and hence admits a unique fixed point by the Banach fixed point theorem. Therefore, locally, we have a unique analytic solution of our ODE. We would now like to say that this solution is also real-valued for real arguments. Since $G=F$ on the real line, the restriction of the analytic fixed point to a real neighbourhood of $x_0$ solves the original real ODE; by the uniqueness in the usual Picard-Lindelöf theorem, it coincides with the (locally) unique $C^1$ solution, which is therefore real-analytic.
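The iteration behind this fixed point argument can be sketched numerically. Below is a minimal illustration (my own example, not part of the answer): for $F(y)=y^2$, $x_0=0$, $\varphi_0=1$, the exact solution is $1/(1-x)$, analytic for $|x|<1$, and the Picard iterates are polynomials whose Taylor coefficients converge to those of the solution:

```python
from fractions import Fraction

# Picard iteration for phi' = phi^2, phi(0) = 1 (F(y) = y^2 is entire);
# the exact solution is 1/(1-x) = sum_k x^k, analytic for |x| < 1.
# A polynomial is a coefficient list: p[k] is the coefficient of x^k.

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def picard_step(phi):
    """T(phi)(x) = 1 + integral_0^x phi(t)^2 dt, done on coefficients."""
    sq = poly_mul(phi, phi)
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(sq)]

phi = [Fraction(1)]
for _ in range(6):
    phi = picard_step(phi)

# The n-th iterate agrees with the exact solution through degree n,
# so after 6 steps the coefficients of x^0..x^5 are all exactly 1.
assert phi[:6] == [Fraction(1)] * 6
print("Taylor coefficients 0..5 converged to those of 1/(1-x)")
```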

ANSWER

One understanding of this set-up is reminiscent of the way sine and cosine (and elliptic functions) arise by taking inverse functions to the functions defined by integrals.

One difficulty in giving elementary estimates for convergence is that (in effect) the zero-set of a power series is pretty hard to predict from the power series coefficients. And, indeed, the argument here involves inverting a function.

Letting $g$ be the inverse function to $f$ (locally...), the differential equation for $g$ turns out to be $g'(z)=Q(z)/P(z)$ (yes, the reciprocal on the right). This is readily solved by integration: $g(z)=\int_{z_0}^z Q(w)/P(w)\,dw$. This integral presentation shows that $g$ is holomorphic wherever $P$ does not vanish. Indeed, the rational function $Q/P$ could be replaced by any holomorphic function.

Then the holomorphic inverse function theorem gives holomorphic $f$ (locally!).
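A concrete instance of this recipe (my own example, with $P$ and $Q$ chosen so that everything is explicit): take $P(t)=1$ and $Q(t)=3t^2$, so $f'=1/(3f^2)$. Then $g(z)=\int_0^z 3w^2\,dw=z^3$, and the local inverse $f(x)=x^{1/3}$ solves the ODE on $(0,1)$:

```python
# Example instance: P(t) = 1, Q(t) = 3t^2, so f' = 1/(3 f^2).
# Then g'(z) = Q(z)/P(z) = 3z^2 and, taking z_0 = 0 with g(0) = 0,
# integration gives g(z) = z^3; its local inverse is f(x) = x^(1/3).
def g(z):
    return z**3                      # g(z) = integral_0^z 3 w^2 dw

def f(x):
    return x ** (1.0 / 3.0)          # local inverse of g on (0, 1)

for x in (0.125, 0.5, 0.9):
    assert abs(g(f(x)) - x) < 1e-12                  # g really inverts f
    h = 1e-6                                         # finite-difference step
    fp = (f(x + h) - f(x - h)) / (2 * h)             # numerical f'
    assert abs(fp - 1.0 / (3.0 * f(x)**2)) < 1e-6    # f' = P(f)/Q(f)
print("f = g^{-1} solves f' = 1/(3 f^2)")
```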

EDIT: prompted by @Severin Schraven's apt comment... first, the discussion just above does not make any effort to determine bad points, nor to see the implications of the assumption that the solution $f$ exists on a particular interval. This is the kind of thing that seems non-trivial to me... so I hesitate to say more, for fear of saying something misleading. Indeed, this argument does not preclude the possibility (so far as I can see) that there might be finitely-many points on the interval where $f$ does fail to be holomorphic. I'd have to think a lot more.