How to conclude that there exists subsequence that converges for all $s\in S$?


Suppose I have a domain $G\subseteq \mathbb{C}\approx \mathbb{R}^2$, a countable dense subset $S\subseteq G$, and a sequence of functions $f_n:G\to \mathbb{R}$ such that $(f_n(s))_{n\in \mathbb{N}}$ is bounded for each $s\in S$. How can I formally show, using the diagonal argument, that there exists a subsequence $(f_{n_{j}})_{j\in \mathbb{N}}$ such that $(f_{n_{j}}(s))_{j\in \mathbb{N}}$ converges for each $s\in S$?

If you could use simple, well-known subsequence notation in your answer, I would be grateful. Also, why is it called the diagonal argument?

I understand the basic step: since $(f_n(s_1))_{n\in\mathbb{N}}$ is bounded, there is a subsequence $(f_{n_{j}}(s_1))_{j\in \mathbb{N}}$ that converges; since $(f_{n_{j}}(s_2))_{j\in \mathbb{N}}$ is bounded, there is a further subsequence $(f_{n_{j_{m}}}(s_2))_{m\in \mathbb{N}}$ that converges; and so on. But I can't see how to conclude that there is a single subsequence that works for all $s\in S$.


Best answer:

I will modify your notation slightly to make the diagonal portion of the argument clear.

As you mentioned, there exists a subsequence of $(f_n)$ such that $(f_{n_k}(s_1))$ converges. We can relabel this subsequence as $(f_{1, k})_{k\in\mathbb{N}}$ instead of $(f_{n_k})_{k\in\mathbb{N}}$ (I chose this labeling so we know that this sequence converges at $s_1$).

Now, within this subsequence of the original sequence of functions, we can find a further subsequence that converges at $s_2$, as you already observed. Let's call this sequence of functions $(f_{2, k})$, with the same reasoning for the naming as before.

We can continue this process of finding subsequences in previous subsequences ad infinitum, constructing a subsequence of functions $(f_{n, k})_{k \in \mathbb{N}}$ for any $n \in \mathbb{N}$. By construction, $(f_{n, k}(s_n))_{k \in \mathbb{N}}$ is convergent.

Now, here's the diagonal portion of the argument. We can visualize all these subsequences as a matrix,

$$ \begin{matrix} f_{1, 1} & f_{1, 2} & f_{1, 3} & \cdots\\ f_{2, 1} & f_{2, 2} & f_{2, 3} & \cdots\\ f_{3, 1} & f_{3, 2} & f_{3, 3} & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{matrix} $$

where, by construction, each row is a subsequence of the row above it. Now consider the diagonal sequence $(f_{k, k})_{k \in \mathbb{N}}$ and choose any $s_j$. The sequence $f_{j, j}(s_j), f_{j+1, j+1}(s_j), \dots$ is a subsequence of the sequence we constructed specifically for $s_j$ (again by construction). Therefore the diagonal sequence converges at $s_j$, since all but finitely many of its terms form a subsequence of a convergent sequence.

Thus, the sequence $(f_{k, k})_{k \in \mathbb{N}}$ is the desired subsequence such that for any $s \in S$, $(f_{k, k}(s))_{k \in \mathbb{N}}$ is convergent.
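The diagonal trick above can be simulated on a finite truncation. Below is a toy sketch of my own (not part of the answer): the choice $f_n(s_j) = (-1)^{b_j(n)}$, where $b_j(n)$ is the $j$-th binary digit of $n$, is an assumption made purely for illustration. Each row keeps the indices on which the evaluation at $s_j$ is eventually constant, and the diagonal picks the $k$-th entry of the $k$-th row.

```python
# Toy model of the diagonal trick (illustrative assumptions throughout).
# f(n, j) plays the role of f_n(s_j): bounded in n for every fixed j.

N = 2000           # truncation of the index set
J = 6              # number of points s_1, ..., s_J we refine over

def f(n, j):
    # (-1) to the j-th binary digit of n; takes only the values +1 and -1
    return (-1) ** ((n >> (j - 1)) & 1)

# rows[j-1] is a subsequence of rows[j-2], chosen so that f(., j) is
# constant (hence "convergent") along it
rows = []
current = list(range(N))
for j in range(1, J + 1):
    current = [n for n in current if f(n, j) == 1]
    rows.append(current)

# the diagonal sequence: k-th element of the k-th row (0-based)
diagonal = [rows[k][k] for k in range(J)]

# for each j, the tail diagonal[j-1:] lies inside row j, so f(., j)
# is constant along that tail -- the diagonal works at every point
for j in range(1, J + 1):
    assert all(n in rows[j - 1] for n in diagonal[j - 1:])
    assert all(f(n, j) == 1 for n in diagonal[j - 1:])
```

The assertions check exactly the two facts used in the answer: each tail of the diagonal is a subsequence of the corresponding row, and therefore the diagonal converges at every point handled by some row.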

Another answer:

The way you "state" it is formal enough. I take it you meant to ask how to formally prove it, correct?

The proof itself is why it is called the diagonal argument: you essentially go down a diagonal to construct the subsequence. For each $s_i\in S$, you write a line consisting of a subsequence of functions (of the previous line) whose evaluation at $s_i$ is a convergent sequence, and then take a "diagonal" as the final subsequence. As for notation to make this "formal", my personal opinion is that the best course of action is to abandon the usual subsequence notation momentarily, since you get into a lot of messy indices if you adhere to it, and to use the language of "increasing reindexations": a subsequence of $x: \mathbb{N} \to X$ is a composition $x \circ u$ with an increasing function $u: \mathbb{N} \to \mathbb{N}$.

Let $F$ be the original sequence of functions (i.e., $F: \mathbb{N} \to \mathcal{F}(G;\mathbb{R})$), and enumerate $S = \{s_1, s_2, \dots\}$. The sequence $i \mapsto \big(F(i)\big)(s_1)$ is bounded, and thus has a convergent subsequence; i.e., there exists an increasing function $u_1: \mathbb{N} \to \mathbb{N}$ such that $i \mapsto \big((F \circ u_1)(i)\big)(s_1)$ is convergent. At the $n$-th step, we consider the bounded sequence $i \mapsto \big((F\circ u_{1}\circ \cdots \circ u_{n-1})(i)\big)(s_n)$ (this here is why the usual notation would get terribly messy), and the same reasoning yields an increasing function $u_n$ such that $i \mapsto \big((F\circ u_{1}\circ \cdots \circ u_{n})(i)\big)(s_n)$ converges.

At the end of this inductive procedure, we consider the increasing* function $U$ given by $$U: \mathbb{N} \to \mathbb{N}$$ $$i \mapsto (u_1 \circ \cdots \circ u_{i})(i).$$ Note that this is where the "diagonal" is hidden. Since $(F \circ U)|_{\{n \in \mathbb{N}\mid n \geq i \}}$ is a subsequence of every row $F \circ u_1 \circ \cdots \circ u_i$**, its evaluation at $s_i$ converges, and hence the evaluation of $F \circ U$ at $s$ converges for every $s \in S$.

Final mandatory observation: there is a reason most books don't do it this way, which is that it obscures the idea a little; the idea is really just the diagonal argument.

*To see why $U$ is increasing, note that any increasing $u: \mathbb{N} \to \mathbb{N}$ satisfies $u(k) \geq k$, so if $n \geq m$, \begin{align*} U(n) &=(u_1 \circ \cdots \circ u_n)(n) \\ &= (u_1 \circ \cdots \circ u_{n-1})(u_n(n))\\ & \geq (u_1 \circ \cdots \circ u_{n-1})(n) \\ & \geq \cdots \\ & \geq (u_1 \circ \cdots \circ u_{m})(n) \\ & \geq (u_1 \circ \cdots \circ u_{m})(m) \\ &=U(m). \end{align*}
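As a numeric sanity check of this footnote, one can model each $u_j$ by an arbitrary strictly increasing map on $\mathbb{N}$ (the formula $u_j(k)=2k+j$ below is a stand-in of my own, not anything from the argument) and verify that the diagonal map $U(i)=(u_1\circ\cdots\circ u_i)(i)$ is increasing:

```python
# Stand-in refinement maps: any strictly increasing N -> N would do.
def make_u(j):
    return lambda k: 2 * k + j

us = [make_u(j) for j in range(1, 10)]   # u_1, ..., u_9

def U(i):
    # U(i) = (u_1 ∘ u_2 ∘ ... ∘ u_i)(i), with 1-based i:
    # apply u_i first, then u_{i-1}, ..., then u_1
    k = i
    for u in reversed(us[:i]):
        k = u(k)
    return k

values = [U(i) for i in range(1, 10)]
assert all(a < b for a, b in zip(values, values[1:]))   # U is increasing
```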

**To see why this is true, it suffices to prove that for every fixed $i$, there exists an increasing function $s$ such that $$(F \circ U)|_{\{n \in \mathbb{N}\mid n \geq i \}}= F \circ u_1 \circ \cdots \circ u_i \circ s.$$ This says that for every $k \geq i$, $$F(U(k))=F\big((u_1 \circ \cdots \circ u_i)(s(k))\big).$$ But $F(U(k))=F\big((u_1\circ \cdots \circ u_k)(k)\big)$, so it suffices to build an increasing $s$ such that $(u_1 \circ \cdots \circ u_k)(k)=(u_1 \circ \cdots \circ u_i)(s(k))$. Such an $s$ is forced by injectivity: it must be $s(k)=(u_1 \circ \cdots \circ u_i)^{-1}\big((u_1 \circ \cdots \circ u_k)(k)\big)=(u_{i+1} \circ \cdots \circ u_k)(k)$, which is well defined because $U(k)$ lies in the image of $u_1 \circ \cdots \circ u_i$ for $k \geq i$. Being the inverse of an increasing function (on its image), $(u_1 \circ \cdots \circ u_i)^{-1}$ is increasing. Since compositions of increasing functions are increasing, $s=(u_1 \circ \cdots \circ u_i)^{-1} \circ U$ is increasing.
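This footnote can be checked numerically as well. Below, $u_j(k)=3k+j$ is again an arbitrary stand-in of my own for the refinement maps, and $s$ is computed directly via the forced formula $s(k)=(u_{i+1}\circ\cdots\circ u_k)(k)$; the assertions confirm that $(u_1\circ\cdots\circ u_i)(s(k)) = U(k)$ for $k \geq i$ and that $s$ is increasing.

```python
# Stand-in refinement maps: any strictly increasing N -> N would do.
def make_u(j):
    return lambda k: 3 * k + j

us = [make_u(j) for j in range(1, 9)]   # u_1, ..., u_8

def compose(funcs, k):
    # apply funcs right-to-left: (funcs[0] ∘ ... ∘ funcs[-1])(k)
    for f in reversed(funcs):
        k = f(k)
    return k

def U(k):
    return compose(us[:k], k)   # U(k) = (u_1 ∘ ... ∘ u_k)(k), 1-based

i = 3
s = lambda k: compose(us[i:k], k)   # s(k) = (u_{i+1} ∘ ... ∘ u_k)(k)

for k in range(i, 9):
    # the tail of the diagonal factors through row i
    assert compose(us[:i], s(k)) == U(k)

svals = [s(k) for k in range(i, 9)]
assert all(a < b for a, b in zip(svals, svals[1:]))   # s is increasing
```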