Assume we have a programming language where the kind of code below is allowed. We can do much more with it than with Turing machines, and we can answer any arithmetic problem.
But what is its main limitation, in terms of what we are interested in doing in mathematics? Do you have an intuitive example of an interesting problem that we cannot express in this framework, one that justifies using something else (such as ZFC)?
For example, it is possible to "prove" in this framework that HALTS is not a computable function, by enumerating the finite programs and, for each one, testing whether it solves the halting problem (enumerating the programs again and comparing its answers with the output of HALTS). So what is impossible to "prove" in this framework?
function doSomething() {
    if (HALTS(Goldbach) == false) { // if the Goldbach conjecture is true
        ...
    } else {
        ...
    }
}
function Goldbach() {
    n = 6;
    while (true) {
        found = false;
        for (k = 3; k <= n/2; k += 2) // k <= n/2, so that n = 6 = 3+3 is found
            if (isPrime(k) && isPrime(n-k)) { found = true; break; }
        if (found == false) return false; // counterexample found: halt
        n += 2;
    }
    // never reached: the loop runs forever iff the conjecture is true
}
function isPrime(n) {
    if (n < 2) return false;
    for (k = 2; k < n-1; k++) if (mod(n,k) == 0) return false;
    return true;
}
If the calls to HALTS are only allowed to use programs or subroutines that do not themselves use the HALTS call, then this use is equivalent to having the halting problem as an oracle, and so the class of functions that you can compute this way will be the $0'$-computable functions, which are the same as the functions with $\Sigma^0_2$-definable graph. These are the same as the limits of computable functions, $f(n)=\lim_{k\to\infty} g(k,n)$ for some computable $g$, where the limit means that the values of $g(k,n)$ eventually stabilize as $k$ increases.
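To illustrate the limit characterization, note that HALTS itself is the limit of computable approximations obtained by step-bounded simulation. A sketch in the same style as the code above, where `haltsWithin(p, k)` is a hypothetical (but ordinarily computable) helper that simulates `p` for at most `k` steps:

function g(k, p) {
    // computable: just simulate p for k steps
    if (haltsWithin(p, k)) return true;  // p halted within k steps
    else return false;                   // not yet (or never)
}
// For each fixed p, the value g(k, p) stabilizes as k grows:
// eventually true if p halts, always false otherwise.
// So HALTS(p) = lim_{k -> infinity} g(k, p).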
But I think that you intend to allow HALTS to be applied to programs that themselves use the HALTS operator. In this case, if arbitrary finite nesting is allowed, then you will get up to the $\omega$-jump $0^{(\omega)}$, which is equivalent to arithmetic truth. This is an extremely powerful level of complexity, basically exceeding most ordinary mathematics. You will be able to write programs that compute extremely fast-growing functions.
But actually, you will be able to go beyond that even, since you will be able to write a single program that computes arithmetic truth. For example, there is a program that takes an arbitrary assertion like $\forall x\exists y\forall z\cdots \varphi(x,y,z)$ in the language of arithmetic and then computes whether or not it is true. The HALTS capacity allows you to decide the quantifiers, by searching for an instance or a counterexample. But then, having computed arithmetic truth, you can continue to compute an oracle for the halting problem relative to that, computing $0^{(\omega+1)}$ and so on into the transfinite ordinals.
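The quantifier-deciding step can be sketched in the same style as the code above; here `phi` stands for an arbitrary decidable property (an assumption for illustration):

// Decide "exists x. phi(x)" by asking whether a search program halts.
function searchPhi() {
    for (x = 0; ; x++) if (phi(x)) return x;  // halts iff a witness exists
}
function existsX() {
    return HALTS(searchPhi);  // true iff exists x. phi(x)
}
// "forall x. phi(x)" is the negation of "exists x. not phi(x)", so it is
// decided the same way; nested quantifiers are handled by applying HALTS
// inside the searched program itself.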
Eventually, however, you will discover issues with the semantic foundations of your HALTS operator. For example, consider the program that, on input $q$, asks whether program $q$ halts on input $q$; if so, it goes into an infinite loop, and if not, it halts immediately. (By "program" here, I mean a program using the HALTS operator.) If $p$ is the program performing the task I have just described, then $p$ halts on $p$ if and only if $p$ does not halt on $p$, which is a contradiction.
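The diagonal program can be written out in the same style as the code above; `runQonQ` is an illustrative construction (a program that runs $q$ on input $q$), not part of the original framework:

function p(q) {
    runQonQ = function() { return q(q); };  // a program that runs q on input q
    if (HALTS(runQonQ)) {
        while (true) { }   // q halts on q: loop forever
    } else {
        return;            // q does not halt on q: halt immediately
    }
}
// Now ask whether p halts on input p: either answer contradicts itself.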
The previous argument is exactly Turing's proof that the halting problem for Turing machines is not Turing computable. But the proof works for virtually any concept of computability, including the one you have described.
The conclusion is that ultimately, there is no satisfactory definition for how the HALTS operator should behave, if it is supposed to answer correctly on programs that themselves use the HALTS operator.