My question is very general, and the kind of answer I'm looking for would be as low-level as possible. I think I can illustrate my query most succinctly with an example.
In propositional logic, you have the modus ponens rule, which takes two hypotheses and asserts a conclusion. The hypotheses are stated as "static" pieces of code: they are just expressions, like the axioms of any theory; the same applies to the (detached) assertion. Finally, the rule itself is also a piece of code, "static" in the sense loosely stated above. In essence, a computation using the modus ponens rule develops this way:
$\text{1) First Hypothesis: } \vdash P \\ \text{2) Second Hypothesis: } \vdash P \rightarrow Q \\ \text{3) Rule Modus Ponens: } \vdash \dfrac{\vdash P \quad\quad \vdash P \rightarrow Q}{\vdash Q} \\ \text{4) Assertion: } \vdash Q$
In the example above I used the turnstile symbol to denote the notion of "provable" or "given".
Even though every step makes up a piece of code, something which could be equated to a string of bits on a memory tape ("static"), clearly between steps 3) and 4) the "detaching" of the assertion takes place. This action of detaching (the computation) appears as an irreducibly "dynamical" process: something which, even if described by the four static written steps, constitutes a concept that seems not to be actually "captured" by them.
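To make the distinction concrete, here is a minimal Python sketch of the example, assuming a made-up representation of formulas (the `Atom`/`Implies` classes and the `modus_ponens` function are purely illustrative). The function's *text* is a static object, just like the hypotheses; the "detaching" only occurs in the dynamical act of calling it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """An atomic proposition, e.g. P."""
    name: str

@dataclass(frozen=True)
class Implies:
    """A conditional formula, e.g. P -> Q."""
    antecedent: object
    consequent: object

def modus_ponens(p, conditional):
    """Static description of the rule; the detachment happens only when this is applied."""
    if isinstance(conditional, Implies) and conditional.antecedent == p:
        return conditional.consequent
    raise ValueError("modus ponens does not apply")

P, Q = Atom("P"), Atom("Q")
# The dynamical step: applying the rule to the two hypotheses.
conclusion = modus_ponens(P, Implies(P, Q))
print(conclusion)  # Atom(name='Q')
```

The source code of `modus_ponens` sitting in a file is exactly the "static" step 3; the call that produces `conclusion` is the step 3)→4) transition.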
Extrapolating from this sort of overly simplistic example, I feel troubled by the task of reconciling the two ideas of computation-as-an-object (for example, as a list of instructions to perform a given procedure, i.e. a computer program) and computation-as-an-action (the dynamical act of producing and spitting out the output from the input data and the procedural rules).
Is there something I'm missing, misinterpreting, or plainly wrongly stating here?
This is a bit off the cuff, so hopefully it won't get too wobbly.
The first step in defining computation is to fix a model. This is essentially what Church and Turing did with the $\lambda$-calculus and Turing Machines respectively. Being a certain flavour of computer scientist, I tend to think in terms of Turing Machines, so I'll reference them, but you can really plug in any model of computation (and there's no need to invoke the Church-Turing thesis in particular, i.e. we don't need at this point to know what truly encompasses all computation, just that there is something that does).
A Turing Machine (TM) is a collection of states with a transition function that describes what happens when a certain input symbol is encountered in each given state. It may or may not halt, it may be deterministic or not, etc. Now, I think the key point is that the TM is itself the computation. The description of what the TM does isn't; it's just a description.
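A deterministic TM can be sketched in a few lines of Python; everything here (the transition-table encoding, the bit-flipping example machine) is invented for illustration. The static `transitions` dictionary is the description; the loop stepping the head along the tape is the machine actually computing.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a deterministic Turing machine.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no transition applies.
    """
    cells = dict(enumerate(tape))   # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no rule applies in this configuration
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    else:
        raise RuntimeError("step limit exceeded")
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: flip every bit, then halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "0110"))  # -> 1001
```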
Relating this to a computer and a program: the computer is (roughly) a universal TM; the program, were it able to run without the computer, would be a TM, but when run on a computer it is just input. The program isn't the computation; the computation is the computer producing the output from the input.
In your example, the list you give is really (depending on your perspective) a description of the "current" state of the computation at four given points. The computation is the process that, given the input, passes through those states.