We have numerous logical systems. For example, PA is a (sort of) formal definition of number theory. PA can (I guess) be viewed as a fragment of ZFC, which makes it weaker than ZFC. Furthermore, there are numerous fragments of PA which are, of course, strictly weaker than PA. This suggests there is an ordering between the systems with respect to their expressiveness.
In computability theory we usually take a Turing-complete model and restrict it to get less expressive "fragments". For example, we can impose time constraints and deal with total polynomial-time Turing machines, or we can define a programming language that guarantees halting of the programs written in it and hence falls short of Turing machines in computational power. This shows we have some kind of ordering here, too.
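To make the second kind of restriction concrete, here is a minimal sketch in Python (the combinator names are my own invention) of a language built only from the primitive recursive combinators: every program halts because every loop is bounded, yet the language cannot express all total computable functions, the Ackermann function being the classic counterexample.

```python
# A tiny "always-halting" language: programs are built only from zero,
# successor, projection, composition, and primitive recursion. Every
# program terminates, but the language is strictly weaker than a
# Turing-complete one.

def zero(*args):
    return 0

def succ(n):
    return n + 1

def proj(i):
    # i-th projection: returns the i-th argument unchanged
    return lambda *args: args[i]

def compose(f, *gs):
    # f(g1(args), ..., gk(args))
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(base, step):
    # h(0, xs)   = base(xs)
    # h(n+1, xs) = step(n, h(n, xs), xs)
    def h(n, *xs):
        acc = base(*xs)
        for i in range(n):      # bounded loop: always terminates
            acc = step(i, acc, *xs)
        return acc
    return h

# addition as primitive recursion: add(n, m) = n + m
add = prim_rec(proj(0), lambda i, acc, m: succ(acc))

# multiplication on top of addition: mul(n, m) = n * m
mul = prim_rec(zero, lambda i, acc, m: add(m, acc))
```

For instance, `add(3, 4)` evaluates to 7 and `mul(3, 4)` to 12, and no program written in these combinators can fail to halt.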
Now, these two concepts seem to be closely related, but I don't understand how. Is it possible to, say, use PA to define a programming language that would be a fragment of a Turing-complete language? Or could we use a Turing-complete model to define a formal logic that would be as expressive as one could ever be (I guess not)? Or are logic and computation after all incomparable, and they just happen to intertwine?
I'm aware of the Curry-Howard correspondence, but that seems only to deepen my confusion on the matter. I'm also aware that some logics coincide with computational complexity classes, and I have studied some finite model theory, but that hasn't really helped me understand the big picture.
I know the question is a bit vague, and I would be perfectly happy with pointers to literature that explains what is going on.
One way to find a link between computation and formal theories is to look at so-called provably computable functions (provably recursive functions). These are the total computable functions which can be proved to be total in a particular theory. One of the main motivating problems in proof theory has been to characterize this set of functions for various theories such as Peano Arithmetic.
This does give us a hierarchy of subclasses of the total computable functions. It is known that not every total computable function is provably total in PA; there are precise characterizations in proof theory of the set of provably total functions of PA. On the other hand, if we weaken the induction scheme in PA to only include induction for $\Sigma^0_1$ formulas, the resulting theory has exactly the primitive recursive functions as its set of provably computable functions.
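A concrete witness for that gap is the Ackermann function: it is total and computable, but grows faster than every primitive recursive function, so it is not provably total in the $\Sigma^0_1$-induction fragment, although PA does prove it total. A standard Python rendering:

```python
# The Ackermann function: total and computable, but not primitive
# recursive. It eventually dominates every primitive recursive function,
# which is why the Sigma^0_1-induction fragment of PA cannot prove it total.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even tiny inputs blow up quickly: `ackermann(2, 3)` is 9, `ackermann(3, 3)` is already 61, and `ackermann(4, 2)` has 19,729 decimal digits.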
There are theories much weaker than PA, known as theories of bounded arithmetic, whose provably computable functions are related to computational complexity classes, although not in a way I can summarize easily with my limited knowledge of bounded arithmetic.
Unfortunately, I don't think there is any undergraduate-level literature on these subjects, which means that the only sources are graduate- and professional-level proof theory and computer science writing. That makes it a difficult area to learn on your own if you don't have a deep background in logic. The Handbook of Proof Theory has an article, "Hierarchies of Provably Recursive Functions" by Stanley S. Wainer, which could be a starting point.