Examples of results failing in higher dimensions


A number of economists do not appreciate rigor in their use of mathematics, and I find this very discouraging.

One example of this lack of rigor is proofs done via graphs or pictures, without the reasoning ever being formalized. I would therefore like to come up with a few examples of theorems (or other important results) which are true, and graphically quite intuitive, in low dimensions, but which fail in higher dimensions.

By the way, these examples are directed towards people who do not have a strong mathematical background (some linear algebra and calculus), so avoiding technical statements would be appreciated.

The Jordan–Schoenflies theorem could be one such example (though most economists are unfamiliar with the notion of a homeomorphism). Could you point me to any others?

Thanks.

11 Answers

BEST ANSWER

Here's an example that doesn't require too much mathematical knowledge, and the low-dimensional result is intuitive graphically:

We know that if a differentiable function $ f : \mathbb{R} \to \mathbb{R} $ has only one stationary point, which is a local minimum, then it must be a global minimum (this is intuitively obvious, and can be proved using Rolle's theorem). However, this result does not generalise to higher dimensions. An example would be $f : \mathbb{R}^2 \to \mathbb{R} $ with $ f(x,y) = x^2 + y^2(1-x)^3 $. This function has a unique stationary point at $ (0,0) $, which is a local minimum but not a global minimum (this can be seen by fixing $x > 1$ and letting $|y|$ grow, e.g. $f(4,1) = -11$).
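A quick numerical check of this example, as a NumPy sketch (the grid size and the sample point $(4, 1)$ are arbitrary choices of mine):

```python
import numpy as np

def f(x, y):
    # the counterexample: unique stationary point at the origin
    return x**2 + y**2 * (1 - x)**3

# Near the origin (1 - x)^3 > 0, so f >= 0 and (0, 0) is a local minimum:
xs = np.linspace(-0.1, 0.1, 201)
X, Y = np.meshgrid(xs, xs)
print(f(X, Y).min())  # ~0: no negative values on this neighborhood

# But the minimum is not global: once x > 1, large |y| pushes f below 0.
print(f(4.0, 1.0))  # 16 + 1 * (-3)**3 = -11.0
```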

Answer:

Simple symmetric random walks in 1 and 2 dimensions return to the origin infinitely many times, but not in 3 and higher dimensions.

That is, if you stand on a number line and repeatedly flip a coin to decide whether to take one step in the positive or the negative direction (or, on the coordinate plane, choose uniformly among the four unit steps), the probability that you will eventually return to the origin is 1. If you do the same in 3 or more dimensions, choosing uniformly among the 6 (or more) coordinate directions for each step, the probability of ever returning to the origin is less than 1.

edit: The probability $p(d)$ that a $d$-dimensional simple symmetric random walk returns to the origin is called Pólya's Random Walk Constant. $p(1)=p(2)=1$, but $p(3)\approx 0.34$, $p(4)\approx 0.19$, $p(5)\approx 0.14$, $p(6)\approx 0.10$, $p(7)\approx 0.09$, and $p(8)\approx 0.07$ (values from the MathWorld article on Pólya's Random Walk Constants).
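The difference can be seen empirically. Below is a small Monte Carlo sketch in pure Python; the walk count and step cap are arbitrary, and the finite horizon means the estimates slightly undershoot the true return probabilities:

```python
import random

def return_probability(dim, walks=2000, max_steps=1000, seed=1):
    # Monte Carlo estimate of the chance that a simple symmetric random
    # walk in `dim` dimensions revisits the origin within `max_steps`
    # steps; the finite horizon makes this a lower bound on p(dim).
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        pos = [0] * dim
        for _ in range(max_steps):
            pos[rng.randrange(dim)] += rng.choice((-1, 1))
            if not any(pos):
                returned += 1
                break
    return returned / walks

print(return_probability(1))  # close to 1 (the walk is recurrent)
print(return_probability(3))  # well below 1, near Polya's constant 0.34
```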

Answer:

"A nonconstant polynomial over $\mathbb R$ (or $\mathbb Q$, or an arbitrary field, according to your colleague's sophistication) which has no zero is irreducible". This is true in degrees $\leq3$ but false in higher degrees. Surprise your economist by asking if $x^4+4$ is irreducible and, after a probably positive answer, calmly write down $$x^4+4=(x^2+2x+2)(x^2-2x+2)$$

[This is not about dimension but maybe it is in the spirit of your question: that lack of rigor can lead to mistakes, even if an assertion is true for some low integers]
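One can verify the identity without any algebra system by multiplying coefficient lists; a minimal self-contained sketch:

```python
def polymul(a, b):
    # multiply two polynomials given as coefficient lists,
    # lowest degree first: [2, 2, 1] is x^2 + 2x + 2
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (x^2 + 2x + 2)(x^2 - 2x + 2) = x^4 + 4
print(polymul([2, 2, 1], [2, -2, 1]))  # [4, 0, 0, 0, 1]
```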

Answer:

I think extending the real plane to the complex plane provides many, many surprising results.

Firstly, if a function is once differentiable on the complex plane, it is infinitely differentiable (such functions, called holomorphic, are infinitely smooth).

Building upon that, a holomorphic function that is bounded on all of $\mathbb{C}$ must be constant. This is Liouville's Theorem.

(Think of a smooth function like sine that is bounded in the reals but is certainly not a constant!)
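This is easy to see numerically with Python's cmath: sine stays bounded on the real axis, but along the imaginary axis $|\sin(iy)| = \sinh y$ grows without bound, so Liouville's theorem is not contradicted:

```python
import cmath

print(abs(cmath.sin(10.0)))  # about 0.544: bounded by 1 on the real line
print(abs(cmath.sin(10j)))   # sinh(10), about 11013: unbounded on C
```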

Answer:

A famous example in mathematical physics is the Ising model. This was invented by Wilhelm Lenz, who gave it as a thesis topic to his student Ernst Ising. Ising solved the problem in one dimension (which is rather easy), finding that there is no phase transition, and incorrectly concluded from this that there is no phase transition in three dimensions. In fact the model does have a phase transition in two or more dimensions, which is what makes it interesting.

Answer:

It is intuitively obvious that a continuous function should be differentiable everywhere except at a countable number of "kink" points... except that the Weierstrass function is continuous everywhere and differentiable nowhere.

Answer:

Maybe this one: every polygon can be triangulated, but not every polyhedron can be tetrahedralized (the Schönhardt polyhedron is a counterexample).

Answer:

Chaos cannot exist in one- or two-dimensional continuous dynamical systems. Among other things, this means that in one or two dimensions, similar inputs produce similar outputs.

However, in 3 or more dimensions the dynamics can be chaotic, meaning that similar inputs do not necessarily lead to similar outputs (you can witness this in the Lorenz equations, a crude model for atmospheric dynamics).

Answer:

The Riemann mapping theorem only works in one complex dimension. Also (alas), the only conformal maps of open subsets of $\mathbb{R}^n$ are Möbius transformations when $n > 2.$

Another nice example (not a graphical example) is whether $x^n - 1$ always factors (over $\mathbb{Z}$) into polynomials whose coefficients are all $-1$, $0$, or $1$. The answer is no, but the first counterexample occurs at $n = 105$.
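The counterexample can be found by direct computation. A sketch: the irreducible factors of $x^n - 1$ over $\mathbb{Z}$ are the cyclotomic polynomials $\Phi_d$ for $d \mid n$, and $\Phi_n$ is obtained by dividing $x^n - 1$ by the $\Phi_d$ for proper divisors $d$:

```python
def polydiv(num, den):
    # exact long division of integer polynomials,
    # coefficients listed lowest degree first
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    for i in range(len(quot) - 1, -1, -1):
        quot[i] = num[i + len(den) - 1] // den[-1]
        for j, d in enumerate(den):
            num[i + j] -= quot[i] * d
    return quot

def cyclotomic(n, _cache={}):
    # n-th cyclotomic polynomial Phi_n (memoized via mutable default)
    if n not in _cache:
        p = [-1] + [0] * (n - 1) + [1]  # x^n - 1
        for d in range(1, n):
            if n % d == 0:
                p = polydiv(p, cyclotomic(d))
        _cache[n] = p
    return _cache[n]

# every coefficient is -1, 0 or 1 for every n < 105 ...
assert all(max(abs(c) for c in cyclotomic(n)) == 1 for n in range(1, 105))
# ... but Phi_105 contains a coefficient of absolute value 2
print(max(abs(c) for c in cyclotomic(105)))  # 2
```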

Also, in Conway and Guy's Book of Numbers, the devious duo trick the reader into thinking that the maximum number of regions into which a disc can be divided by all the chords joining $n$ points on its boundary is $2^{n-1}.$
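That region-counting trap (Moser's circle problem) can be checked directly: for $n$ points in general position on a circle, all the chords cut the disc into $\binom{n}{4} + \binom{n}{2} + 1$ regions, which follows the doubling pattern only up to $n = 5$:

```python
from math import comb

def circle_regions(n):
    # maximal number of regions of a disc cut by all chords
    # among n boundary points in general position
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 7)])  # [1, 2, 4, 8, 16, 31]
```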

These two aren't examples of theorems that don't hold in higher dimensions, but they do demonstrate that tenable hypotheses still need to be proven.

Richard Guy has a nice article with examples like this:

"The strong law of large numbers," The American Mathematical Monthly 95.8 (1988): 697-712

As you can see, these might not be particularly motivational for economists. Perhaps the more important part of rigor to economists would be remembering the assumptions that theorems depend upon?

Answer:

Knots exist only in three-dimensional space: in four or more dimensions, every knotted loop can be untied.

Answer:

The opposite direction is also true: there are nice things that happen in higher dimensions that do not happen in lower ones. For example, the complex function $f(z) = 1/z$ has a singularity at $0$ that cannot be removed (the limit of $f$ as $z$ goes to zero does not exist). By Hartogs's extension theorem, however, the isolated singularities of a complex differentiable function of two or more variables can always be removed. In particular, the limits of these functions as they approach the singular points always exist.