How to generalize this version of Tarski’s Fixed Point Theorem?


I could prove the following result from my Real Analysis course:

Let $f:[0,1] \rightarrow [0,1]$ be an increasing mapping. Then it has a fixed point.

I understand that this is a very baby version of Tarski’s Fixed Point Theorem. Now, I wish to generalize this a little bit and get the following:

Let $f:[0,1]^n \rightarrow [0,1]^n$ be increasing in the sense that if $y \geq x$ coordinate-wise then $f(y) \geq f(x)$ coordinate-wise. Then $f$ has a fixed point.

My idea was the following: pick a point $x_0 \in [0,1]^n$, fix all coordinates but one, and apply the above lemma to that coordinate. Then, once the first coordinate of the fixed point is found, do the same for the second, and so on.

However, I am not sure this route would be successful, and even if it were, I cannot write the extension formally. Any ideas? Thanks a lot in advance!


3 Answers

Best Answer

The "baby version" is not sufficient to obtain your result for $n > 1$ as a corollary. This is possible only in case that $f : [0,1]^n \to [0,1]^n$ has the special form $$f(x_1,\dots,x_n) = (\phi_1(x_1),\dots,\phi_n(x_n))$$ with continuous increasing $\phi_i : [0,1] \to [0,1]$. But in general $f$ has the form $$f(x_1,\dots,x_n) = (f_1(x_1,\dots,x_n),\dots,f_n(x_1,\dots,x_n))$$ where each $f_i :[0,1]^n \to [0,1]$ depends on all variables.

In fact, your result is true, but you will need a new proof. See Asaf Karagila's answer.

Let me close with a remark concerning your strategy to find a fixed point. It is not expedient to pick some $x_0 =(x^0_1,\dots,x^0_n)$ and consider the functions obtained by fixing all but one coordinate. Let us see what happens for $n=2$. You consider the function $f^1 : [0,1] \to [0,1]$, $f^1(x) = f_1(x,x^0_2)$, and conclude that it has a fixed point $\xi_1$, i.e. $f_1(\xi_1,x^0_2) = \xi_1$. Next you consider $f^2 : [0,1] \to [0,1]$, $f^2(x) = f_2(x^0_1,x)$, and conclude that it has a fixed point $\xi_2$, i.e. $f_2(x^0_1,\xi_2) = \xi_2$. But there is no reason why you should have $f(\xi_1,\xi_2) = (\xi_1,\xi_2)$.
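To see this failure concretely, here is a small Python sketch; the increasing map $f(x,y) = (y,y)$ is a toy example of my own choosing, not something from the question:

```python
# f(x, y) = (y, y) is increasing on [0,1]^2:
# (x, y) <= (x', y') coordinate-wise implies f(x, y) <= f(x', y').
def f(x, y):
    return (y, y)

x0 = (0.2, 0.8)  # an arbitrary starting point

# Step 1 of the strategy: f_1(x, 0.8) = 0.8 is constant in x,
# so its unique fixed point is xi1 = 0.8.
xi1 = 0.8

# Step 2: f_2(0.2, y) = y, so *every* y is a fixed point; suppose the
# procedure happens to return xi2 = 0.2.
xi2 = 0.2

print((xi1, xi2))    # (0.8, 0.2)
print(f(xi1, xi2))   # (0.2, 0.2) -- so (xi1, xi2) is not a fixed point of f
```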

Answer

The Knaster–Tarski fixed point theorem states that:

Suppose that $(L,\leq)$ is a complete lattice and $f\colon L\to L$ is increasing, i.e. $x\leq y\implies f(x)\leq f(y)$. Then $\{x\in L\mid f(x)=x\}$ is a complete lattice under the induced order.

Note that a complete lattice cannot be empty, since $\varnothing$ must have a supremum and an infimum (the supremum of $\varnothing$ is the bottom element of the lattice, and its infimum is the top element).


Note that $[0,1]$ is a complete lattice, and so is $[0,1]^n$. Granted, most uses of this theorem only care that there is a fixed point at all. This is commonly used in the proof of the Cantor–Bernstein theorem, where we want to find a set closed under some function.
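As a small illustration of that use, here is a Python sketch that computes the smallest set containing a seed and closed under a function, as the least fixed point of an increasing map on a powerset lattice; the ground set, the function, and the seed are toy choices of mine:

```python
# For u : X -> X and a seed set T, the map F(S) = T ∪ u[S] is increasing
# on the powerset of X (S ⊆ S' implies F(S) ⊆ F(S')), and its least fixed
# point is exactly the closure of T under u.
X = set(range(20))
u = lambda n: (2 * n) % 20    # a toy function X -> X
T = {3}                       # seed set

def F(S):
    return T | {u(n) for n in S}

# Iterate from the bottom element (the empty set); on a finite lattice the
# increasing chain ∅ ⊆ F(∅) ⊆ F²(∅) ⊆ ... stabilises at the least fixed point.
S = set()
while F(S) != S:
    S = F(S)

print(sorted(S))   # [3, 4, 6, 8, 12, 16]: the closure of {3} under doubling mod 20
```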

Just finding a fixed point is easy. Let $D=\{x\in L\mid x\geq f(x)\}$ and let $s=\inf D$. Even if $D$ were empty it would still have an infimum, namely the top element of $L$, because $L$ is a complete lattice. (In fact $D$ is never empty here: the top element $\top$ satisfies $f(\top)\leq\top$, so $\top\in D$.)

Now, if $x\in D$, then $s\leq x$ and therefore $f(s)\leq f(x)\leq x$. Since this holds for all $x\in D$, $f(s)$ is a lower bound of $D$, so $f(s)\leq s$ by virtue of $s$ being the infimum. Therefore $s\in D$, and it is in fact a minimum. But now we have $f^2(s)\leq f(s)\leq s$, so $f(s)\in D$ as well, which means that $s\leq f(s)$ (again because $s$ is the infimum), and therefore $f(s)=s$. We can even show that $s$ is the smallest fixed point, i.e. the bottom element of the lattice of fixed points.
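Here is a brute-force Python sketch of exactly this construction on a tiny finite lattice; the lattice (a powerset) and the increasing map are toy choices of mine:

```python
from itertools import combinations

# The complete lattice L is the powerset of {0, 1, 2} ordered by inclusion;
# the infimum of a family is its intersection, and inf(∅) is the top element.
X = {0, 1, 2}
subsets = [set(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

# An increasing map (toy choice): S ⊆ T implies f(S) ⊆ f(T).
def f(S):
    return {0} | {x + 1 for x in S if x + 1 in X}

# Exactly as in the proof: D = {x ∈ L : x ≥ f(x)} and s = inf D.
D = [S for S in subsets if f(S) <= S]   # "<=" is "⊆" for Python sets
s = set(X)                              # start from the top element (inf of ∅)
for S in D:
    s &= S

assert f(s) == s                                   # s is a fixed point
assert all(s <= S for S in subsets if f(S) == S)   # and the least one
print(s)                                           # {0, 1, 2} for this f
```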


If we assume continuity of $f$, in the sense that it respects the $\sup$ operator, we get an even easier proof of what is known as the Kleene fixed-point theorem.

Let's use $0$ to denote the bottom element of $L$. Then $0\leq f(0)\leq f^2(0)\leq\dots$; let $x=\sup\{f^n(0)\mid n\in\Bbb N\}$, which exists since $L$ is a complete lattice. Since $f$ is continuous we get that $$f(x)=f(\sup\{f^n(0)\mid n\in\Bbb N\}) = \sup\{f(f^n(0))\mid n\in\Bbb N\}=\sup\{f^{n+1}(0)\mid n\in\Bbb N\}=x.$$
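For the question's setting, here is a minimal numerical sketch of this iteration on $[0,1]^2$; the affine map below is a toy choice, and note that it is continuous, which is the extra hypothesis this proof uses:

```python
import numpy as np

# Kleene iteration on the complete lattice [0,1]^2: the bottom element is
# the zero vector, and the sup of the chain 0 <= f(0) <= f^2(0) <= ... is
# its limit.  The map is increasing in every coordinate and continuous.
def f(x):
    return np.array([0.5 * x[0] + 0.25 * x[1] + 0.1,
                     0.3 * x[0] + 0.4 * x[1] + 0.2])

x = np.zeros(2)            # bottom element of [0,1]^2
for _ in range(200):       # the increasing chain f^n(0) converges
    x = f(x)

print(x, f(x) - x)         # approximately (0.4889, 0.5778), residual ~ 0
```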


Finally, we apply these proofs to your question: since $[0,1]$ and $[0,1]^n$ are both complete lattices, the very same proof handles both cases, and in fact much, much more.

Answer

I want to add another perspective to the two answers already given. In particular, OP's initial idea can actually work when applied correctly. Of course, the resulting proof is more cumbersome than the one by Asaf, but I think it is still nice to point out that this idea works.

For simplicity, I will limit this explanation to the two-dimensional case. Let us start with an increasing function $f : [\ell_1, r_1] \times [\ell_2, r_2] \rightarrow [\ell_1, r_1] \times [\ell_2, r_2]$ that maps $(x, y)$ to $f(x, y) = (f_1(x, y), f_2(x, y))$. For fixed $\hat{x}$, define the function $g_{\hat{x}} : [\ell_2, r_2] \rightarrow [\ell_2, r_2]$ that maps $y$ to $f_2(\hat{x}, y)$. Notice that $g_{\hat{x}}$ is increasing and hence has a fixpoint $y^*$.

Now consider the point $(\hat{x}, y^*)$. Since $f_2(\hat{x}, y^*) = y^*$, comparing $f(\hat{x}, y^*)$ with $(\hat{x}, y^*)$ only concerns the first coordinate, so we either have $f(\hat{x}, y^*) \geq (\hat{x}, y^*)$ or $f(\hat{x}, y^*) \leq (\hat{x}, y^*)$. In the first case, the function $f$ restricted to $[\hat{x}, r_1] \times [y^*, r_2]$ is still increasing and maps this domain to itself: for $(x, y) \geq (\hat{x}, y^*)$ we get $f(x, y) \geq f(\hat{x}, y^*) \geq (\hat{x}, y^*)$. In the second case, we restrict the function to the domain $[\ell_1, \hat{x}] \times [\ell_2, y^*]$ instead. In either case, we get a smaller domain (if we choose $\hat{x} = \frac{1}{2}(r_1 + \ell_1)$, the width of the domain is halved).

Using the same argument recursively, we get smaller and smaller domains. If we do it in a smart way (i.e. fixing a different coordinate in each recursion), this sequence of domains will converge to a single point $p$, which is guaranteed to be a fixpoint. To see this, observe that we always maintain $f(\ell_1, \ell_2) \geq (\ell_1, \ell_2)$ and $f(r_1, r_2) \leq (r_1, r_2)$ for the current domain; since $p$ lies between these corners, monotonicity gives $f(p) \geq f(\ell_1, \ell_2) \geq (\ell_1, \ell_2)$ for every domain in the sequence, hence $f(p) \geq p$ in the limit, and symmetrically $f(p) \leq p$.

Note that this essentially gives you a nice algorithm for approximating such a fixpoint; a sketch follows below. In the discrete setting, this has been studied e.g. here: https://web.stanford.edu/~yyye/unitarski1.pdf.
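Here is a numerical sketch of the procedure in Python; the helper names `fix1d`/`fix2d`, the alternation between coordinates, and the toy map at the end are my own illustration choices, not taken from the linked paper:

```python
def fix1d(g, lo, hi, tol=1e-9):
    """Bisection for a fixed point of an increasing g : [lo, hi] -> [lo, hi].
    Invariant: g(lo) >= lo and g(hi) <= hi, which holds initially because
    g maps the interval into itself."""
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        if g(m) >= m:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

def fix2d(f, l1, r1, l2, r2, tol=1e-9):
    """The coordinate-fixing strategy: alternate which coordinate is fixed,
    so both side lengths halve each round.  Invariant (up to tol):
    f(l1, l2) >= (l1, l2) and f(r1, r2) <= (r1, r2)."""
    while max(r1 - l1, r2 - l2) > tol:
        # Fix x at the midpoint, solve the 1D problem in y.
        xh = 0.5 * (l1 + r1)
        ys = fix1d(lambda y: f(xh, y)[1], l2, r2, tol)
        if f(xh, ys)[0] >= xh:      # f(xh, ys) >= (xh, ys) coordinate-wise
            l1, l2 = xh, ys
        else:                       # f(xh, ys) <= (xh, ys)
            r1, r2 = xh, ys
        # Now fix y at the midpoint, solve the 1D problem in x.
        yh = 0.5 * (l2 + r2)
        xs = fix1d(lambda x: f(x, yh)[0], l1, r1, tol)
        if f(xs, yh)[1] >= yh:
            l1, l2 = xs, yh
        else:
            r1, r2 = xs, yh
    return 0.5 * (l1 + r1), 0.5 * (l2 + r2)

# Toy increasing map on [0,1]^2, chosen for illustration.
f = lambda x, y: (0.5 * x + 0.25 * y + 0.1, 0.3 * x + 0.4 * y + 0.2)
print(fix2d(f, 0.0, 1.0, 0.0, 1.0))   # ~ (0.4889, 0.5778)
```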