Understanding Baker and Rippon's proof of a result in iteration theory

Let $a \in \mathbb{C}$, $b = e^a$, and $T(z) = e^{az}$. Define $W_n = T^n(1)$, where $T^n$ is the $n$th iterate of $T$; thus $W_1 = b$, $W_2 = b^b$, $W_3 = b^{b^b}$, and so on, so that $W_n$ is the $n$-fold exponential tower of $b$. The main result that motivated this question is Theorem 1 below.

Note: in Theorems 2, 3, and 4 and in the proof of Theorem 1, $f$ is a non-linear entire function; $\mathscr{F}(f)$ is the set of points of the complex plane having no neighborhood on which the sequence $(f^n)$ is a normal family; and $\mathscr{C}(f)$ is the complement of $\mathscr{F}(f)$.

Theorem 1: If $a = te^{-t}$ with $|t| = 1$ and $t^k = 1$ for some $k \in \mathbb{N}$, then $W_n$ converges to $e^t$.

Proof: Suppose that $a=te^{-t}$ with $|t|=1$, so that $e^t$ is a fixed point of $T(z)= e^{az}$ with multiplier $t$. Since $t \mapsto te^{-t}$ is univalent in $|t|\leqslant 1$, there is only one such $t$ for a given $a$, and $e^t$ is the only possible limit of $W_n$. By Theorem 2, there is a component $D$ of $\mathscr{C}(T)$ in which $T^n \to e^t$ and which contains the only singular point of $T^{-1}$, namely the origin. But $T(D) \subset D$ by Theorem 4, so that $1 = T(0) \in D$, and thus $W_n= T^n(1)$ converges to $e^t$.
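The opening computation of the proof can be checked directly: with $a = te^{-t}$ one has $T(e^t) = e^{ae^t} = e^{te^{-t}e^t} = e^t$ and $T'(e^t) = ae^t = t$. A small numerical sanity check (my own illustration, not from the paper):

```python
import cmath

def check_fixed_point(t, tol=1e-9):
    """Verify that e^t is a fixed point of T(z) = e^{az} with multiplier t,
    where a = t*e^{-t}."""
    a = t * cmath.exp(-t)
    z = cmath.exp(t)                # candidate fixed point e^t
    Tz = cmath.exp(a * z)           # T(e^t)
    Tprime = a * cmath.exp(a * z)   # T'(z) = a*e^{az}, evaluated at e^t
    assert abs(Tz - z) < tol        # T(e^t) = e^t
    assert abs(Tprime - t) < tol    # multiplier is t
    return a

# any t on the unit circle works, e.g. t = i (a fourth root of unity) or t = -1
check_fixed_point(1j)
check_fixed_point(-1)
```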

I am trying to figure out exactly where Baker and Rippon have used the assumption that $t$ is a root of unity. This is important to me because the sequence $W_n$ does not converge if $|t| = 1$ but $t$ is not a root of unity, and I am struggling to understand how the addition or removal of this one condition produces such a drastic change in the behavior of $W_n$. I believe the fact that $t$ is a root of unity is used in Theorem 2.
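The dichotomy can at least be observed numerically. The sketch below (my own illustration, not part of the paper) iterates $T(z)=e^{az}$ starting from $1$: for $t=1$ (so $a=e^{-1}$, the classical tower of $e^{1/e}$) the orbit creeps toward $e^t=e$, with error roughly of order $1/n$ since the fixed point is parabolic, which is why so many iterations are used; for a value like $t=e^{i}$, where $|t|=1$ but $t$ is presumably not a root of unity, the orbit appears not to settle down.

```python
import cmath, math

def orbit(t, n):
    """Return W_n = T^n(1) for T(z) = e^{az}, a = t*e^{-t}."""
    a = t * cmath.exp(-t)
    z = 1.0 + 0j
    for _ in range(n):
        if abs(z) > 100:        # guard: exp would overflow; orbit left the region of interest
            break
        z = cmath.exp(a * z)
    return z

# t = 1 (a root of unity): W_n -> e^t = e, but only slowly
w = orbit(1.0, 100_000)
print(abs(w - math.e))          # small

# t = e^{i}: |t| = 1 but not a root of unity; consecutive W_n stay apart
print(orbit(cmath.exp(1j), 500), orbit(cmath.exp(1j), 501))
```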

Theorem 2: If $\alpha$ is a fixed point of $f$ such that $f'(\alpha)$ is a root of unity, then $\alpha \in \mathscr{F}(f)$ but $\alpha$ lies on the boundary of one or more components $D$ of $\mathscr{C}(f)$ in which $f^n \to \alpha$ as $n \to \infty$, and at least one such $D$ contains a singularity of $f^{-1}$.

Theorem 3: For any integer $p > 1$, $\mathscr{F}(f) = \mathscr{F}(f^p)$.

The proof of Theorem 2 relies on being able to study the iteration of $F = f^p$ instead of $f$ itself, since by Theorem 3 $\mathscr{F}(F) = \mathscr{F}(f)$.

Theorem 4: $\mathscr{C}(f)$ and $\mathscr{F}(f)$ are completely invariant under $f$ in the sense that if $z\in \mathscr{C}(f)$ then $f(z)\in \mathscr{C}(f)$, and if further $f(w) = z$ then $w \in \mathscr{C}(f)$.

To be clear, I am asking for the proofs of Theorems 3 and 4. This is because the authors state them without proof, and because I consider them important in understanding the proof of Theorem 1.

Note: all results are taken from I. N. Baker and P. J. Rippon's article Convergence of infinite exponentials, Ann. Acad. Sci. Fenn. 8, 179–186 (1983), DOI: 10.5186/aasfm.1983.0805.

On BEST ANSWER

Let's introduce some further notation first. For a domain $G \subset \mathbb{C}$ and a family $\mathscr{S} \subset \mathscr{O}(G)$, let

$$\mathscr{N}(\mathscr{S}) := \bigl\{ z \in G : \bigl(\exists r > 0\bigr)\bigl(\mathscr{S}\lvert_{D_r(z)} \text{ is normal}\bigr)\bigr\},$$

where $\mathscr{S}\lvert_A$ is the restriction of $\mathscr{S}$ to $A$, $\mathscr{S}\lvert_A = \{ f\lvert_A : f \in \mathscr{S}\}$. We next make some observations whose proofs should be easy to complete.

Since every subfamily of a normal family is normal, we have

$$\mathscr{S}_1 \subset \mathscr{S}_2 \implies \mathscr{N}(\mathscr{S}_2) \subset \mathscr{N}(\mathscr{S}_1).\tag{1}$$

Further, if $\mathscr{S}\lvert_A$ is normal and $B \subset A$ then $\mathscr{S}\lvert_B$ is normal too. Also, the union of finitely many normal families (of holomorphic functions on the same domain) is normal, thus it follows that

$$\mathscr{N}(\mathscr{S}_1) \cap \mathscr{N}(\mathscr{S}_2) \subset\mathscr{N}(\mathscr{S}_1 \cup \mathscr{S}_2).$$

The opposite inclusion follows from $(1)$, so overall we have

$$\mathscr{N}(\mathscr{S}_1) \cap \mathscr{N}(\mathscr{S}_2) = \mathscr{N}(\mathscr{S}_1 \cup \mathscr{S}_2).\tag{2}$$

Next we note that if $h \colon \Omega \to G$ is holomorphic, then

$$h^{-1}(\mathscr{N}(\mathscr{S})) \subset \mathscr{N}(\mathscr{S}\circ h),\tag{3}$$

where $\mathscr{S}\circ h = \{ f\circ h : f \in \mathscr{S}\}$. For if $\mathscr{S}$ is normal on $D_r(h(w))$, then $\mathscr{S}\circ h$ is normal on $h^{-1}(D_r(h(w))) \supset D_{\rho}(w)$ for a suitable $\rho > 0$. If $h$ isn't constant on any component of $\Omega$, then we also have

$$h(\mathscr{N}(\mathscr{S}\circ h)) \subset \mathscr{N}(\mathscr{S}).\tag{4}$$

Indeed, a sequence $(f_k)$ in $\mathscr{S}$ converges uniformly on $h(A)$ if and only if the sequence $(f_k \circ h)$ converges uniformly on $A$, and by the open mapping theorem $h(U)$ is a neighbourhood of $h(w)$ for every neighbourhood $U$ of $w$.

Now we apply these observations to the situations considered in Theorems 3 and 4. We note that $\mathscr{C}(f) = \mathscr{N}(\{ f^n : n \in \mathbb{N}\})$ by definition, and observe that by $(1)$, $(2)$ and the fact that every finite family is normal we also have $\mathscr{C}(f) = \mathscr{N}(\{ f^n: n \in \mathbb{N},\, n \geqslant k\})$ for every $k \in \mathbb{N}$.

We first prove Theorem 4. Immediately from $(3)$ and the previous observation, we obtain

$$f^{-1}(\mathscr{C}(f)) = f^{-1}(\mathscr{N}(\{ f^n : n \in \mathbb{N}\})) \subset \mathscr{N}(\{ f^n : n \in \mathbb{N}\} \circ f) = \mathscr{N}(\{ f^n : n \in \mathbb{N},\, n \geqslant 1\}) = \mathscr{C}(f),$$

which is the second assertion in that theorem. For the first assertion, if $f$ is non-constant, we use $(4)$ instead of $(3)$:

$$f(\mathscr{C}(f)) = f(\mathscr{N}(\{ f^n : n \in \mathbb{N},\, n \geqslant 1\})) = f(\mathscr{N}(\{ f^n : n \in \mathbb{N}\}\circ f)) \subset \mathscr{N}(\{ f^n : n \in \mathbb{N}\}) = \mathscr{C}(f).$$

If $f$ is constant, the assertion follows immediately from $\mathscr{C}(f) = \mathbb{C}$.

To prove Theorem 3, we first observe that by $(2)$ we have

$$\mathscr{C}(f) = \mathscr{N}\Biggl(\bigcup_{k = 0}^{p-1} \{ f^{pn + k} : n \in \mathbb{N}\}\Biggr) = \bigcap_{k = 0}^{p-1} \mathscr{N}(\{ f^{pn+k} : n \in \mathbb{N}\}).$$

This yields $\mathscr{C}(f) \subset \mathscr{C}(f^p)$ (from $k = 0$), and to obtain the equality, we must further show

$$\mathscr{C}(f^p) \subset \mathscr{N}(\{ f^{pn+k} : n \in \mathbb{N}\})$$

for $1 \leqslant k \leqslant p-1$. Fix such a $k$, and let $z \in \mathscr{C}(f^p)$. Since $\mathscr{C}(f^p)$ is open, we can find $r > 0$ such that $K := \overline{D_r(z)} \subset \mathscr{C}(f^p)$. Let $(g_m)$ be an arbitrary sequence in $\{ f^{pn+k} : n \in \mathbb{N}\}$. Then $g_m = f^{pn_m + k}$ for a sequence $(n_m)$ in $\mathbb{N}$, and by definition of $\mathscr{C}(f^p)$, a subsequence of $(f^{pn_m})$ converges uniformly on $K$. Without loss of generality, we can assume the full sequence does. Let the limit function be $g$, and set $L_0 = g(K)$.

Let $L = \{ w \in \mathbb{C} : \operatorname{dist}(w,L_0) \leqslant 1\}$. Then $L$ is compact, hence $f^k$ is uniformly continuous on $L$, and we have $f^{pn_m}(K) \subset L$ for all large enough $m$. Since $g_m = f^k \circ f^{pn_m}$, this implies that the sequence $(g_m)$ converges uniformly on $K$: we have

$$\max \{\lvert g_{m_1}(z) - g_{m_2}(z)\rvert : z \in K\} \leqslant \omega (\max \{ \lvert f^{pn_{m_1}}(z) - f^{pn_{m_2}}(z)\rvert : z \in K\})$$

for $m_1,m_2$ large enough that $f^{pn_{m_i}}(K) \subset L$, where $\omega$ is the modulus of continuity of $f^k\lvert_L$,

$$\omega(\delta) = \max \{ \lvert f^k(x) - f^k(y)\rvert : x,y \in L, \lvert x-y\rvert \leqslant \delta\}.$$

Therefore, we have $z \in \mathscr{N}(\{ f^{pn+k} : n \in \mathbb{N}\})$. Since $z$ and $k$ were arbitrary, it follows that

$$\mathscr{C}(f^p) \subset \bigcap_{k = 0}^{p-1} \mathscr{N}(\{ f^{pn+k} : n \in \mathbb{N}\}) = \mathscr{C}(f),$$

and Theorem 3 is proved.
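As a concrete (purely numerical, non-rigorous) illustration of how Theorems 1 and 3 interact, take $t=-1$, so that $t^2=1$, $a=-e$ and $T(z)=e^{-ez}$. The multiplier at the fixed point $e^{-1}$ is $-1$, so it is $F=T^2$ that has a parabolic fixed point with multiplier $1$, and the orbit $W_n=T^n(1)$ splits into the two residue classes $W_{2n}$ and $W_{2n+1}$ considered in the proof of Theorem 3. Both subsequences, and hence the full sequence, converge to $e^{-1}$, though very slowly:

```python
import math

# t = -1, so a = t*e^{-t} = -e and T(z) = exp(-e*z); the orbit stays real
a = -math.e

def T(z):
    return math.exp(a * z)

z, evens, odds = 1.0, [], []
for n in range(200_000):
    z = T(z)
    (odds if n % 2 == 0 else evens).append(z)  # n = 0 yields W_1, an odd iterate

# both residue classes mod p = 2 approach the fixed point e^{-1}
target = math.exp(-1)
print(abs(evens[-1] - target), abs(odds[-1] - target))
```

The orbit oscillates around $e^{-1}$, with the odd iterates approaching from one side and the even iterates from the other, which matches the picture of a multiplier-$(-1)$ fixed point whose square is parabolic.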