The Church–Turing thesis states that "a function is computable iff it is computable by a Turing machine."
Similarly, I wonder whether there is some thesis stating that "a mathematical truth is provable iff it is provable in $F$," where $F$ is a well-known formal system (for example $\mathsf{ZFC}$) and "provable" means that we can understand the proof and be convinced that it proves something really true; it does not rely on some arbitrary axiom added just to assert an arbitrary truth.
For example, to my understanding, answers to instances of the halting problem are mathematical truths, so every provable answer to an instance of the halting problem would be provable in $F$, and we would in principle be able to understand those proofs.
So, is there some formal system which is today considered consistent and which will forever remain the most complete possible system for producing understandable proofs?
It's worth noting that for sufficiently simple statements, this is actually a theorem: we can prove that every true $\Sigma^0_1$ sentence is provable in the very weak theory Robinson arithmetic, $\mathsf{Q}$. Moreover, that proof itself goes through in a very weak theory (although not quite as weak as Robinson arithmetic). In jargon: Robinson arithmetic is $\Sigma^0_1$-complete.
$\Sigma^0_1$ is a "syntactic complexity class" - whether a sentence is $\Sigma^0_1$ depends simply on what it looks like. Specifically, a $\Sigma^0_1$ sentence is any sentence in the language of arithmetic (that is, using $0$, $1$, $+$, $\cdot$, and $<$) of the form $$\exists x_1,...,x_k\varphi(x_1,...,x_k)$$ where $\varphi$ uses only bounded quantifiers (namely "$\forall y<z$" and "$\exists y<z$"). For example, positive instances of the halting problem are $\Sigma^0_1$, and so every Turing machine which halts can be proved to halt in $\mathsf{Q}$.
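To make this concrete, here is a small Python sketch (the particular sentence, and the names `phi` and `semidecide`, are my own illustrations, not part of the theorem): because the matrix $\varphi$ uses only bounded quantifiers, checking $\varphi$ at any candidate witness is a finite, terminating computation. A blind search over witnesses therefore halts exactly when the $\Sigma^0_1$ sentence is true, and the finite verifying computation is, in essence, the proof in $\mathsf{Q}$.

```python
# Semidecision procedure for a Sigma^0_1 sentence "exists x. phi(x)",
# where the matrix phi uses only bounded quantifiers (hence is decidable).

def phi(x):
    # Illustrative matrix: "exists y < x such that y * y == x",
    # i.e. "x is the square of some strictly smaller number".
    return any(y * y == x for y in range(x))

def semidecide(matrix, limit=None):
    """Search for a witness. Halts iff the sentence is true.
    (The `limit` cutoff exists only so this demo itself terminates.)"""
    x = 0
    while limit is None or x < limit:
        if matrix(x):
            return x  # found a witness; verifying it took finitely many steps
        x += 1
    return None

witness = semidecide(phi, limit=100)
# The search stops at the first witness: 4 (since 2 * 2 == 4).
```

This is exactly the sense in which true $\Sigma^0_1$ sentences are "self-certifying": the witness together with the bounded computation checking it can be transcribed into a formal $\mathsf{Q}$-proof.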
The class $\Sigma^0_1$ forms half of the first "layer" of a larger syntactic hierarchy, the arithmetical hierarchy. Gödel's incompleteness theorem kicks in at the level of $\Pi^0_1$ sentences: for every "reasonable" theory $T$, there is a $\Pi^0_1$ sentence which $T$ neither proves nor disproves. So $\Sigma^0_1$-completeness is the best we can hope for in practice.
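The asymmetry between the two halves of that first layer can be seen in a standard example of a $\Pi^0_1$-style statement, Goldbach's conjecture ("every even number $\geq 4$ is the sum of two primes"). The sketch below (my own illustration; `goldbach_instance` is a hypothetical helper name) shows that each individual instance is decidable, so a counterexample, if one exists, would eventually be found - a $\Sigma^0_1$ event - but no finite amount of instance-checking verifies the universal claim.

```python
# A Pi^0_1-flavored statement: "every even n >= 4 is a sum of two primes"
# (Goldbach's conjecture). Each instance is a finite, decidable check,
# but the universal statement quantifies over infinitely many instances.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_instance(n):
    """Decidable check of one instance: is n a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# We can *refute* the conjecture by finding a counterexample (and that
# refutation would be a Sigma^0_1 fact, provable in Q), but confirming
# every instance up to some bound proves nothing about the rest:
checked_up_to_1000 = all(goldbach_instance(n) for n in range(4, 1000, 2))
```

This is the shape of the incompleteness phenomenon: true $\Pi^0_1$ sentences need not be provable, even though each of their instances is.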
As to your question, I would say that (with $F=\mathsf{ZFC}$) this is a reasonably common position within the mathematical community but nowhere close to universal - and rarely held with confidence remotely approaching that of the general acceptance of the Church–Turing thesis. Certainly there's a nontrivial segment of the set theory community who staunchly believe that we will indeed later find "obviously true" axioms to add to $\mathsf{ZFC}$, and even that this will be a never-ending process.
So why the dramatic change of situation from computations to truths? Well, certainly one difference is the absence of robustness. In the context of computations, we have many different models of many different "flavors" which all yield the same notion. In the context of foundational theories, however, this doesn't play out in the same way: at the very least there is meaningful debate over whether the axiom of choice is really "obviously true," and as one dives into the subject further there are also trans-$\mathsf{ZFC}$ principles which arguably have some intuitive weight behind them - how can we rule their ideal axiomhood out with certainty? Finally, as a really big blow to robustness note that naive comprehension - which I really do think is "almost obviously true" - is outright inconsistent. Nothing like that has happened in the context of computability: there has been no proposed algorithm which it turns out later on can't be a part of any coherent computability theory, and indeed the coherence of ideas like oracle computability and higher recursion theory strongly suggests that that will never happen.
So the way we settle on a given foundational theory is just much more fragile than the way we settle on a given model of computation. I think this is a fundamental difference between the two topics.