Why are problems with simple formulations so hard?


If you have ever heard of the Collatz conjecture, you know that it is understandable even by middle school students, yet no one has solved it. The problem is to prove or disprove that, starting with any natural number $a_0$ and iterating $a_{n+1} = f(a_n)$, you will eventually reach $a_m = 1$ for some $m$.

$$ f(n)=\begin{cases}n/2&{\text{if }}n\equiv 0{\pmod {2}}\\(3n+1)/2&{\text{if }}n\equiv 1{\pmod {2}}.\end{cases} $$
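To make the iteration concrete, here is a minimal Python sketch of the map above (the function names are my own):

```python
def collatz_step(n):
    """One application of the shortcut Collatz map f."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def collatz_trajectory(n):
    """Iterate f from n until reaching 1, returning the whole trajectory.

    The conjecture is precisely the claim that this loop always terminates.
    """
    traj = [n]
    while n != 1:
        n = collatz_step(n)
        traj.append(n)
    return traj

print(collatz_trajectory(7))  # [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```

Note that nothing in the code guarantees the `while` loop halts; that is exactly what is unproven.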

There are many other "simple" problems in mathematics that are still unsolved. For example, Goldbach's conjecture states:

Every even integer greater than 2 can be expressed as the sum of two primes.
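Checking the conjecture for any particular even number is easy by brute force; only the universal claim is hard. A small sketch (helper names are my own):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return one pair of primes summing to the even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(goldbach_pair(28))  # (5, 23)
```

Such searches have verified the conjecture far beyond $10^{18}$, but no finite amount of checking proves it for all even integers.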

Do you have any intuitive explanation as to why modern advanced mathematics cannot solve such seemingly simple problems?


There are 2 best solutions below


Simple to state can also mean that there are too many trivial cases to easily hit a contradiction. That makes a conjecture hard to disprove, and, combined with unusual uses of the objects involved, it can make the conjecture nearly impossible to settle until we develop more theory.

You can turn a conjecture into corollaries, necessary conditions, and so on, but that does not easily let us deduce the conjecture itself.

A necessary condition by itself is not usually a sufficient condition. It takes a large enough set of them to ensnare the problem.

One known condition on Collatz is that any nontrivial cycle has a lowest element which, if odd, has an odd number of prime factors of the form $4k+3$ when counted with multiplicity.
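The quantity involved in that condition is easy to compute. Here is a sketch (my own code; it only counts the relevant prime factors, it does not verify the cycle claim):

```python
def omega_4k3(n):
    """Count prime factors of n congruent to 3 (mod 4), with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            if d % 4 == 3:
                count += 1
            n //= d
        d += 1
    if n > 1 and n % 4 == 3:  # leftover prime factor
        count += 1
    return count

print(omega_4k3(21))  # 2, since 21 = 3 * 7 and both are of the form 4k+3
```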

You can partially approach Goldbach via the Collatz map and https://oeis.org/A158709, but only partially, even if that sequence is infinite.

Our two main proof strategies currently seem unlikely to reach a conclusion either way.


Simple problems may be hard to solve because they are really hard problems underneath the surface. Let me explain why I think this is so. Such a problem looks simple because it is stated in plain words and simple arithmetic. We learn basic counting and operations on numbers, such as adding, multiplying, and dividing, very early, and they are part of human culture.

The trouble with stating problems in this simple way is that the underlying mechanics of the problem are hidden from the observer. Numbers in base $10$ may look simple because we are used to counting with them and we learn simple rules. However, when we iterate those rules, the results become more and more complex as the numbers change. The $3n+1$ problem is a very good example of this. If you study its complexity in base $2$, you start to see features that are difficult to see in base $10$, and you can also find simple rules that produce the complexity we observe. See The 3x+1 Problem: A Quasi Cellular Automaton by Thomas Cloney, Eric Goles and Gérard Y. Vichniac.

A quote from the paper:

Base 2 is a natural choice because of the control role of the parity of each iterate under eqn.(1) and because it lends itself to detailed high-resolution graphic display. The bit representation displays the "insides" of the iterates, in contrast with number theory approaches that focus only on iterate size, a more limited form of information represented by the envelope of the binary evolution.
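In the spirit of that binary display, here is a rough sketch (my own code, not the paper's) that prints the binary expansions of successive iterates with their low-order bits aligned, much like a cellular-automaton space-time diagram:

```python
def collatz_step(n):
    """The shortcut Collatz map f from the question."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def binary_evolution(n, steps, width=20):
    """Binary expansions of successive iterates, right-aligned so the
    low-order bits line up column by column."""
    rows = []
    for _ in range(steps):
        rows.append(format(n, "b").rjust(width, "."))
        n = collatz_step(n)
    return rows

for row in binary_evolution(27, 12):
    print(row)
```

Each row is one iterate; the patterns running down the bit columns are the "insides" the quote refers to, which size-based number-theoretic approaches flatten into a single envelope.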

If you are not familiar with cellular automata, I suggest reading a few tutorials or explanations by Stephen Wolfram, or the book NKS if you have access to it, before reading this paper. Build an intuition for how simple automaton rules produce complex output; then you can start to connect the dots, or at least get a sense of what might be going on.

Cloney, Thomas; Goles, Eric; Vichniac, Gérard Y., The (3x+1) problem: A quasi cellular automaton, Complex Syst. 1, No. 2, 349-360 (1987). ZBL0662.10010.