Proving a general statement about sequences and continuous functions.

It is fairly common in a first course on Analysis to prove that if $\{a_i\}$ and $\{b_i\}$ are sequences $ℕ→ℝ$ converging to $a, b \in ℝ$ respectively, then the sequence $\{a_i+b_i\}$ converges to $a+b \in ℝ$. The same idea works for subtraction, multiplication, division (when the denominators and their limit are non-zero), etc.
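
For instance, the sum case is the familiar $\epsilon/2$ argument (just sketching the standard proof here): given $\epsilon>0$, pick $N_a$ and $N_b$ with $|a_i-a|<\epsilon/2$ for $i\ge N_a$ and $|b_i-b|<\epsilon/2$ for $i\ge N_b$; then for every $i\ge\max(N_a,N_b)$,
$$|(a_i+b_i)-(a+b)|\le|a_i-a|+|b_i-b|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.$$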

I believe that there might be a theorem that can work as a tool to prove all the previous statements in a general manner. If I'm correct, it would go something like this:

Let $(X, d)$ be a metric space,

let $f_1, \ldots, f_n$ be convergent sequences $ℕ→X$ for which there exist $Y \subseteq X$ and $N \in ℕ$ such that $f_i(j)\in Y$ for all $i \le n$ and all $j \ge N$, and $\lim \limits_{x \to ∞}f_i(x)=F_i \in Y$,

and let $g:X^n→X^m$ be a function continuous on $Y^n$.

Then $$\lim \limits_{x_1 \to ∞ , \ldots, x_n \to ∞}g(f_1(x_1), \ldots, f_n(x_n)) = g(F_1, \ldots, F_n)$$
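
If the claim is right, the sum rule above should just be the special case $X=\mathbb R$ with $d(x,y)=|x-y|$, $Y=\mathbb R$, $n=2$, $m=1$, $f_1=\{a_i\}$, $f_2=\{b_i\}$, and $g(x,y)=x+y$ (which is continuous everywhere); its conclusion
$$\lim_{x_1\to\infty,\,x_2\to\infty}\bigl(a_{x_1}+b_{x_2}\bigr)=a+b$$
in particular gives $\lim_{i\to\infty}(a_i+b_i)=a+b$ by restricting to $x_1=x_2=i$.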


However, since I'm fairly new to Analysis, I do not know how to prove such a general claim, and I was wondering if anyone knew of a complete formal proof of it.

Any help/thoughts would be really appreciated.

1 Answer

I will use the following two definitions.

  • If $(X,d)$ is a metric space, then $(X^n,d^n)$ is the metric space whose underlying set is the Cartesian product $X\times X\times \cdots \times X$ and whose metric is $d^n((x_1,\dots,x_n),(y_1,\dots,y_n))=\max_i d(x_i,y_i)$ (a short remark on this metric follows these definitions).

  • Let $h:\mathbb N^n\to X^m$. We say $\lim_{x_1\to\infty,\dots,x_n\to\infty}h(x_1,\dots,x_n)=L$ if for all $\epsilon>0$ there exists an $N\in \mathbb N$ such that $\min_i x_i\ge N$ implies $d^m(h(x_1,\dots,x_n),L)<\epsilon$.
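
A remark on the first definition (just a sanity check, not an extra assumption): a point $p=(p_1,\dots,p_n)$ is within $\epsilon$ of $q=(q_1,\dots,q_n)$ in $(X^n,d^n)$ exactly when every coordinate is within $\epsilon$ in $(X,d)$, since
$$d^n(p,q)=\max_i d(p_i,q_i)<\epsilon \iff d(p_i,q_i)<\epsilon \text{ for all } i.$$
This is why the coordinatewise estimates below suffice to control the product metrics.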

$g$ being continuous at $(F_1,\dots,F_n)$ (with respect to the metrics $d^n$ and $d^m$) means that for all $\epsilon>0$, there is a $\delta>0$ such that $$\max_i d(y_i,F_i)<\delta \implies \max_j d\bigl(g(y_1,\dots,y_n)_j,\,g(F_1,\dots,F_n)_j\bigr)<\epsilon.$$ So, given $\epsilon>0$, choose such a $\delta$, and then for each $1\le i \le n$ choose an index $N_i$ so that $$x\ge N_i \implies d(f_i(x),F_i)<\delta.$$
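
For a concrete instance of this step (only an illustration of the quantifiers, not needed for the general proof): with $X=\mathbb R$, $n=2$, $m=1$ and $g(y_1,y_2)=y_1+y_2$, continuity at $(F_1,F_2)$ holds with $\delta=\epsilon/2$, because $\max_i|y_i-F_i|<\epsilon/2$ gives
$$|(y_1+y_2)-(F_1+F_2)|\le|y_1-F_1|+|y_2-F_2|<\epsilon,$$
and $N_1,N_2$ are exactly the tail indices from the usual one-variable proof.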

Letting $N=\max_i N_i$ and combining the last two paragraphs (with $y_i=f_i(x_i)$) shows $$ \min_i x_i\ge N\implies \max_j d\bigl(g(f_1(x_1),\dots,f_n(x_n))_j,\,g(F_1,\dots,F_n)_j\bigr)<\epsilon. $$ This is precisely the definition of $\lim_{x_1,\dots,x_n\to\infty} g(f_1(x_1),\dots,f_n(x_n))=g(F_1,\dots,F_n)$.
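
To recover the limit laws from the question, one takes $X=\mathbb R$ and the appropriate $g$: $g(x,y)=x+y$, $g(x,y)=x-y$, and $g(x,y)=xy$ work with $Y=\mathbb R$, while for $g(x,y)=x/y$ one needs a $Y$ that avoids $0$ so that $g$ is continuous on $Y^2$. Restricting the resulting multivariable limit to the diagonal $x_1=\dots=x_n$ then gives the usual single-variable statements.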