Why aren't math proofs just computer trial and error?


I already asked a similar question, but I recently began a course in Logic, and it gave me not an answer but a refinement of my question, which I restate here.

My thinking is the following:

Suppose we have a finite set of $n$ symbols {$\forall$, $\neg$, ...} and some statements {$P$, $Q$, ..., $Z$} made with those symbols and assumed true, hence axioms.

Now suppose that we want to prove a statement $A$; that is, we want to start from the axioms and arrive at $A$ using only a logical combination of our symbols.

My question is: why not set a computer to try every possible path?

To clarify, we want something like $P \land Q \land \text{...} \land Z \to \text{(?)} \to A$, so my question is: why not set a computer to replace the $(?)$ with each of the $n$ symbols from {$\forall$, $\neg$, ...} in turn, and if that doesn't work, try every string of two symbols, then three, and so on?
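The enumeration described above can be sketched in a few lines. Note that `is_valid_proof`, the toy alphabet, and the target string are all hypothetical stand-ins: a real system would need an actual proof checker and a full logical vocabulary.

```python
from itertools import product

# Toy alphabet standing in for the logical symbols; a real system would
# include quantifiers, connectives, parentheses, variables, etc.
SYMBOLS = ["∀", "¬", "∧", "→", "P", "Q", "A"]

def is_valid_proof(candidate):
    """Placeholder for a proof checker (hypothetical).

    A real checker would verify that `candidate` is a legal derivation
    of the target statement from the axioms.  Here we just compare
    against a fixed string so the search terminates.
    """
    return "".join(candidate) == "P∧Q→A"

def brute_force_search(max_len):
    # Try every string of 1 symbol, then 2, then 3, ... up to max_len.
    for length in range(1, max_len + 1):
        for candidate in product(SYMBOLS, repeat=length):
            if is_valid_proof(candidate):
                return "".join(candidate)
    return None

print(brute_force_search(6))  # finds "P∧Q→A"
```

Even in this toy, finding a 5-symbol target over a 7-symbol alphabet already takes on the order of $7^5 \approx 17{,}000$ checks; the count multiplies by 7 for every extra symbol, which is the blowup the answer below quantifies.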

It is curious, because as I write this I realize that the time complexity of this process would be enormous. Still, I stand by my question: if the problem is an important one, like the Goldbach Conjecture, why not set a lot of computers to work and optimize the process? And if we will inevitably do it someday, why not start now? Relatedly, I would also like to know whether quantum computing would help the cause I proposed, making math just a matter of combinatorial computer trial and error.

My question is somewhat vague, but any glimpse of why math proofs can't work like that would be of great value.

Best answer:

This absolutely works in theory. The problem is that, in practice, the number of computers needed ("a lot") turns out to be "too many".

Computers are fast. Blazingly fast. And yet, breaking into my social media accounts by guessing my password, 10+ characters long with letters and digits, requires up to $62^{10} \approx 8 \times 10^{17}$ guesses, which at a rate of 1000 attempts per second would take roughly 27 million years. Getting a second computer wouldn't make a dent in that for any person alive. Getting a thousand computers wouldn't really matter either; we'd still be talking tens of thousands of years.
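The arithmetic behind that estimate is straightforward to check:

```python
# Number of 10-character passwords over upper- and lowercase letters
# plus digits.
alphabet = 26 + 26 + 10          # 62 symbols
guesses = alphabet ** 10         # ≈ 8.4e17 candidate passwords
rate = 1000                      # guesses per second
seconds_per_year = 60 * 60 * 24 * 365
years = guesses / rate / seconds_per_year
print(f"{guesses:.2e} guesses ≈ {years:.1e} years")  # ≈ 2.7e7 years
```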

A mathematical proof, written out in first- or second-order logic, is a lot longer than my social media password, and each candidate takes more time to check for correctness. A thousand or a million computers or more would cost a fortune, and would still be unlikely to produce results within the programmer's lifetime. Investing in flesh-and-blood mathematicians yields a much higher return.
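To see how much worse proofs are than passwords, note that with $n$ symbols there are $n^L$ candidate strings of length $L$, and the search must cover every length up to $L$. The alphabet size and proof lengths below are illustrative assumptions only:

```python
# Growth of exhaustive proof search: n symbols give n**L candidate
# strings of length L.  Even a small alphabet and a short proof dwarf
# the password example above (about 8e17 guesses).
n = 20  # a tiny alphabet for real logic: connectives, quantifiers, variables...
for L in (10, 50, 100):
    print(f"length {L:3d}: about {n ** L:.0e} candidates")
```

A 100-symbol "proof" over just 20 symbols already gives around $10^{130}$ candidates, far beyond any conceivable amount of hardware, classical or quantum.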

That said, computers absolutely are useful for assisting with well-defined parts of mathematical proofs, going back at least to the controversy over the computer-assisted proof of the Four Color Theorem in 1976.