What are some of the consistency strengths (say, in terms of proof-theoretic ordinals) of the full axiomatizations of the most commonly used theories in physics, for those theories that have been fully axiomatized, such as General Relativity, the Standard Model, and other theories from QM and QFT?
Is there any guess, roughly, as to what level of strength would be required, at minimum, for a theory of everything? This is assuming such a thing is eventually possible and ends up not very far off from "quantum gravity."
My motivation for this question is both a) curiosity about how formalized theories of physics compare, strength-wise, to other well-known theories, and b) philosophical: if one believes the universe *is* a mathematical structure, or that it must be describable by a consistent theory in order to exist, then this would, to me, be a "reason" to believe in the consistency of theories up to a certain strength.
Sincerely sorry if this question is too soft for here or would be better on a different Stack Exchange site, but I couldn't find a better one, nor any existing information on this question.
I think the main issue is whether you want to talk about mathematical theories of physics in the normal manner, e.g. using the spectral theorem for arbitrary self-adjoint operators on Hilbert spaces, or whether you are willing to 'trim down' these theories to algorithms that give approximate predictions, of arbitrary accuracy, for what's observed in specific experiments.
In the former case, where we do mathematically rigorous physics in the ordinary style, we're superficially using ZFC — but we can then work to see if weaker foundations will suffice. For example, analysts have spent a lot of time working to restate theorems without the need for the Axiom of Choice, and I feel sure physicists could do without it if they wanted.
In the latter case, where we try to take specific theories of physics and strip them down to algorithms for computing approximate predictions given approximate initial data, these algorithms can be expressed in terms of arithmetic (since a rational number can be specified by a pair of integers, and so on), and one can argue that anything beyond recursive functions $f : \mathbb{N} \to \mathbb{N}$ is not really needed. There's an extensive theory of 'computable analysis' devoted to these questions:
and in my undergrad thesis I proved that multiparticle quantum mechanics with Coulomb forces was computable in this sense:
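To make the notion concrete, here is a toy illustration (my own, not from the thesis) of what "computable in this sense" means: a real number is given by an algorithm that, on input $n$, outputs a rational within $2^{-n}$ of it, using only exact integer arithmetic.

```python
from fractions import Fraction

def sqrt2(n):
    """Return a rational q with |q - sqrt(2)| <= 2**-n, by bisection
    on [1, 2] using only exact rational arithmetic (pairs of integers)."""
    lo, hi = Fraction(1), Fraction(2)
    # Invariant: lo**2 <= 2 <= hi**2, so sqrt(2) lies in [lo, hi].
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo
```

The point is that the whole procedure is a recursive function on (codes of) rationals; no completed infinite object like "the real number $\sqrt{2}$" is ever manipulated, only approximations of guaranteed accuracy.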
But that's about functions being computable, not theorems being provable. It would be nice if people took theorems about the observable consequences of physical theories and tried to prove them using weak foundations. It's quite possible that something as weak as PRA (primitive recursive arithmetic) would suffice! That has proof-theoretic ordinal $\omega^\omega$. As you probably know, Harvey Friedman has conjectured that Fermat's Last Theorem can be proved in elementary function arithmetic, which has proof-theoretic ordinal just $\omega^3$. So far, Colin McLarty has shown that finite-order arithmetic suffices:
A similar study for mathematical physics would be interesting, but I'm not aware of it.
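For context, the standard ordinal assignments for the systems mentioned here, with full Peano arithmetic included for comparison, line up as follows:

```latex
\mathsf{EFA}\ (\omega^3) \;<\; \mathsf{PRA}\ (\omega^\omega) \;<\; \mathsf{PA}\ (\varepsilon_0)
```

So the conjecture is that the arithmetic consequences of physical (and much mathematical) reasoning live far below the strength of PA, let alone ZFC.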
After writing the above, I saw this comment by David Roberts on G+:
He's referring to this: