Attracting fixed points of linear maps


Let $f$ be a map on $\mathbb{R}^n$ and let $p \in \mathbb{R}^n$ be a fixed point. If there is an $\epsilon > 0$ such that for all $v$ in the $\epsilon$-neighborhood $N_\epsilon(p)$, $\lim_{k \to \infty} f^k(v) = p,$ then $p$ is a $\textbf{sink}$ or $\textbf{attracting fixed point}$.

Let $A$ be a linear map on $\mathbb{R}^n$. I can't show that

The origin is a sink if all eigenvalues of $A$ are smaller than one in absolute value.

Can someone give me a hint?

Best answer:

The big tool that we want to use is the Banach fixed point theorem, which can be stated as follows:

Theorem: Let $(X,d)$ be a nonempty complete metric space, and suppose that $f : X \to X$ is a contraction mapping; that is, suppose that there is some $c \in (0,1)$ such that $$ d(f(x), f(y)) \le c\, d(x,y) $$ for all $x,y\in X$. Then there is a unique point $x^* \in X$ such that $f(x^*) = x^*$, and for every starting point $x_0 \in X$ the iterates $x_{n+1} = f(x_n)$ converge to $x^*$.

The proof is not insurmountable and follows from a relatively straightforward intuition: if you start with any initial point $x_0$ and let $x_{n+1} = f(x_{n})$, then $\lim_{n\to \infty} x_n = x^*$ is a reasonable candidate for the fixed point. Since $f$ is a contraction mapping, it follows that $$ d(x_n, x_{n+1}) \le c\, d(x_{n-1}, x_n) \le \dotsb \le c^n d(x_0, x_1), $$ which can be used to show that $(x_n)_{n\in \mathbb{N}}$ is a Cauchy sequence, and therefore converges. Uniqueness requires a bit more work, but also isn't terribly difficult. Keith Conrad gives a more complete exposition if you are curious.
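As a quick sanity check of this iteration, here is a minimal Python sketch; the choice of $\cos$, which is a contraction on $[-1,1]$, is purely an illustrative assumption and not part of the argument above:

```python
import math

def iterate(f, x0, n):
    """Run the fixed-point iteration x_{k+1} = f(x_k) for n steps."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# cos maps R into [-1, 1], and on that interval |cos'(x)| = |sin(x)| <= sin(1) < 1,
# so cos is a contraction there and the iterates converge to the unique x* = cos(x*).
x_star = iterate(math.cos, 0.5, 200)
y_star = iterate(math.cos, -0.9, 200)  # a different start reaches the same fixed point
```

Both runs land on the same point (roughly $0.739$), illustrating both existence and uniqueness of the fixed point.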

Now, suppose that $A : \mathbb{R}^n \to \mathbb{R}^n$ is a linear map, and suppose that all of the eigenvalues of $A$ have absolute value strictly less than $1$. We would like to show that $0$ is an attracting fixed point of $A$. We already know that $0$ is a fixed point (since $A0 = 0$ for any linear map $A$), so it remains to show that nearby points are attracted to it. For this, we want to invoke the Banach fixed point theorem, which means that we need to show that $A$ is a contraction mapping.

To that end, we might want to introduce a little bit of language from functional analysis.

Definition: Let $(X, \|\cdot\|_X)$ be a normed vector space, and let $A : X \to X$ be a linear map, i.e. suppose that $A(x+y) = Ax + Ay$ for all $x,y\in X$ and that $A(\alpha x) = \alpha (Ax)$ for all $\alpha$ in the base field and $x \in X$. Then the operator norm of $A$ is defined to be $$ \|A\|_{op} := \sup_{\|x\|_X = 1} \|Ax\|_X = \sup_{x \ne 0} \frac{ \|Ax\|_X }{ \|x\|_X }. $$
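In $\mathbb{R}^n$ with the Euclidean norm, this supremum equals the largest singular value of the matrix, which NumPy exposes as the matrix $2$-norm. A small sketch, where sampling random unit vectors is only a crude way to visualize the sup definition, not how the norm is actually computed:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))  # an arbitrary example matrix

# Operator norm induced by the Euclidean norm = largest singular value.
op_norm = np.linalg.norm(A, 2)

# Approximate sup_{||x|| = 1} ||Ax|| by sampling many random unit vectors.
xs = rng.normal(size=(10000, 3))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
sampled = np.max(np.linalg.norm(xs @ A.T, axis=1))
# sampled never exceeds op_norm, and approaches it with enough samples
```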

This can be generalized to define the norm of an operator $A : X \to Y$ where $X$ and $Y$ are both normed vector spaces, but we don't really need that generality here. The basic idea of this proof is that you can define the "size" of an operator by looking at what it does to all of the unit vectors. The operator norm is the largest factor by which it stretches a unit vector.

Indeed, we are actually interested in a more specific case: if $A : \mathbb{R}^n \to \mathbb{R}^n$ is a linear map, then $$ \| A \|_{op} = \sup_{\|x\| = 1} \| Ax \| = \sup_{x \ne 0} \frac{ \|Ax\|}{\|x\|} = \sup_{x\ne 0} \frac{d(Ax, 0)}{d(x,0)}, $$ where $\|x\| = d(x,0)$ is the Euclidean norm $$ \| x \| = \left( \sum_{k=1}^{n} |x_k|^2 \right)^{1/2}. $$ Hence for any $x \in \mathbb{R}^n$, we have $$ \|Ax\| \le \|A\|_{op} \|x\|. $$ Notice that with a bit of manipulation, this gives us $$ \| Ax - Ay \| = \| A(x-y) \| \le \|A\|_{op} \| x-y \|, $$ hence if $\|A\|_{op} < 1$, then $A$ is a contraction mapping and so we may invoke the Banach fixed point theorem. It remains only to show that the operator norm has something to do with the eigenvalues.
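Here is a short numerical sketch of that Lipschitz bound and of the resulting convergence to $0$; rescaling a random matrix so that $\|A\|_{op} = 0.9$ is just a convenient way to manufacture a contraction:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
A *= 0.9 / np.linalg.norm(A, 2)  # rescale so that ||A||_op = 0.9 < 1

x, y = rng.normal(size=3), rng.normal(size=3)

# ||Ax - Ay|| <= ||A||_op * ||x - y||: the operator norm is a Lipschitz constant.
lhs = np.linalg.norm(A @ x - A @ y)
rhs = np.linalg.norm(A, 2) * np.linalg.norm(x - y)

# Since ||A||_op < 1, repeated application drives any point to the fixed point 0.
v = x.copy()
for _ in range(200):
    v = A @ v
```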

HERE BE DRAGONS

This is actually a little delicate, and the following may be slightly incomplete or possibly not entirely correct (this is not theory that I deal with very often, and it is possible that I have made a really stupid mistake toward the end, but I think it checks out). If $A$ is Hermitian (since $A$ is an operator on $\mathbb{R}^n$, this implies that $A$ is equal to its own transpose, i.e. $A$ is symmetric), then it can be shown that $$ \| A \|_{op} = \rho(A) = \max\{ |\lambda| : \text{$\lambda$ is an eigenvalue of $A$} \}. $$ Again Conrad has a pretty good explanation as to why this is true (specifically, see section 3, beginning on page 5).
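For a real symmetric matrix this identity is easy to check numerically; a minimal sketch (NumPy's `eigvalsh` is the eigenvalue routine for symmetric/Hermitian matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
S = (B + B.T) / 2  # symmetrize to get a real symmetric (Hermitian) matrix

op_norm = np.linalg.norm(S, 2)  # largest singular value
spectral_radius = np.max(np.abs(np.linalg.eigvalsh(S)))
# For symmetric S these two quantities agree (up to rounding).
```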

If $A$ is not symmetric, then we need to do a bit more work. The basic idea (I think) is Gelfand's formula $$ \rho(A) = \lim_{k\to\infty} \|A^k\|_{op}^{1/k}. $$ This is shown in Theorem 12 of the document that Andres Mejia suggested you look at. Since we have assumed that $\rho(A) < 1$, it follows that $$ \|A^k\|_{op}^{1/k} < 1 $$ for sufficiently large $k$, which is possible only if $\|A^k\|_{op} < 1$. Now apply the above argument to $A^k$ instead of $A$: the iterates of $A^k$ send any point to $0$, and since the orbit of $v$ under $A$ splits into the $k$ interleaved orbits of $v, Av, \dots, A^{k-1}v$ under $A^k$, the whole orbit converges to $0$ as well. I am waving my hand a bit, but I think that it should be possible to fill in the details.
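A sketch of this last point, using a hypothetical $2 \times 2$ matrix chosen so that the spectral radius is below $1$ while the operator norm is not (so $A$ itself is not a contraction, but a high power of it is):

```python
import numpy as np

# Spectral radius 0.5, but the large off-diagonal entry makes ||A||_op > 1.
A = np.array([[0.5, 10.0],
              [0.0,  0.5]])

rho = np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius: 0.5
norm_A = np.linalg.norm(A, 2)               # > 1, so A is not a contraction

# Gelfand's formula: ||A^k||_op^(1/k) -> rho(A), so a high enough power of A
# has operator norm below 1 even though A itself does not.
k = 30
gelfand = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)

# The orbit can grow at first, but still converges to the origin.
v = np.array([1.0, 1.0])
for _ in range(100):
    v = A @ v
```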