A proof I've seen concerning reductions for $\mathsf{NP}$-hard problems relies on evaluating the complexity of an algorithm that computes a function defined recursively on the structure of propositional formulas.
More precisely, we consider the formal language of classical propositional logic, say $\mathcal L$, defined over a set of variables $Var=\{p_0,p_1,\dots\}$ as the minimal set closed under the following rules:
- $Var\subseteq\mathcal L$
- If $\phi,\psi\in\mathcal L$, then $\phi\lor\psi,\phi\land\psi,\neg\phi\in\mathcal L$
I now want to consider a translation function $t:\mathcal L\to\mathcal L$ which is specified by giving values to all elements of $Var$ and which is extended to $\mathcal L$ by distributing over the boolean connectives, i.e.
$$\begin{aligned}t(\phi\lor\psi)&=t(\phi)\lor t(\psi)\\ t(\phi\land\psi)&=t(\phi)\land t(\psi)\\ t(\neg\phi)&=\neg t(\phi)\end{aligned}$$
Now, I can see that this function is computable (provided that the values for the propositional variables are computable, which is assumed for now), and I can also envision a trivial recursive algorithm for it, e.g.
```
Algorithm 1. Eval-t
Input: a formula phi
1 If phi is in Var, return t(phi)  // whatever this is
2 If phi is of the form psi v chi, return Eval-t(psi) v Eval-t(chi)
3 If phi is of the form psi & chi, return Eval-t(psi) & Eval-t(chi)
4 If phi is of the form - psi, return - Eval-t(psi)
```
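The recursive algorithm above can be sketched in Python. This is only one possible concretization: I assume formulas are represented as nested tuples (a variable is a string, and `("or", a, b)`, `("and", a, b)`, `("not", a)` are the compound forms), and the variable translation `t_var` is a hypothetical placeholder for whatever $t$ does on $Var$.

```python
def eval_t(phi, t_var):
    """Apply the translation t, distributing over the connectives.

    Formulas are nested tuples: a variable is a string, and compound
    formulas look like ("or", a, b), ("and", a, b), ("not", a).
    t_var is the (assumed computable) translation on variables.
    """
    if isinstance(phi, str):                  # phi is in Var
        return t_var(phi)                     # whatever this is
    op = phi[0]
    if op in ("or", "and"):
        return (op, eval_t(phi[1], t_var), eval_t(phi[2], t_var))
    if op == "not":
        return (op, eval_t(phi[1], t_var))
    raise ValueError("not a well-formed formula")
```

For example, with the (hypothetical) translation $t(v)=\neg v$, `eval_t(("or", "p0", ("not", "p1")), lambda v: ("not", v))` yields `("or", ("not", "p0"), ("not", ("not", "p1")))`.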
However, I am not sure how to evaluate the complexity of this algorithm. More generally, I wonder how to treat computable functions like this one formally in terms of complexity.
While you have a recursive algorithm, in this specific case I think it's easier to write an iterative algorithm that operates on the formula as a string. The algorithm reads the string left to right and performs substitution: whenever a variable $v$ is seen, it computes and outputs $t(v)$; every other symbol, such as (, ), $\wedge$, etc., is output unchanged. Therefore, the complexity of the algorithm is linear in the size of the formula, plus the number of variable occurrences in the formula times the complexity of computing $t(v)$.
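This single left-to-right pass can be sketched as follows. I'm assuming single-character variable names and a known variable alphabet, and `t_var` is a hypothetical function returning the replacement string for a variable; none of these names come from the question.

```python
def translate(formula, t_var, variables=frozenset("pq")):
    """One left-to-right pass over the formula string.

    Substitutes t_var(v) for each variable occurrence v and copies
    every other symbol (connectives, parentheses) unchanged.
    """
    out = []
    for ch in formula:
        if ch in variables:
            out.append(t_var(ch))    # cost of computing t(v) paid here
        else:
            out.append(ch)           # copied in constant time
    return "".join(out)
```

For instance, with $t(v)=\neg v$ written in ASCII, `translate("(p&(-q))", lambda v: "(-" + v + ")")` gives `"((-p)&(-(-q)))"`, and the total work is one pass over the input plus one call to `t_var` per variable occurrence.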
What about the original recursive algorithm? Turing machines have no native support for recursion; it needs to be converted into explicit stack handling. Here, whenever the algorithm operates on a subformula starting at index $i$ and ending at index $j$, we could push the pair of indices $(i,j)$ onto the stack. Or we could copy the entire subformula ($j-i+1$ characters) onto the tape. Or we could first parse the formula into a tree, and then maintain and navigate a pointer structure. All of these approaches run in polynomial time, since each can be expressed as a program with a finite nesting of loops whose bounds are polynomial in the input size.
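To make the "explicit stack handling" concrete, here is a sketch of the recursion simulated with a work stack, on the same nested-tuple representation as before (the index-pair variant on strings would be analogous). The tags `"visit"`/`"build"` and the helper name are my own choices, not anything from the answer.

```python
def eval_t_iterative(phi, t_var):
    """Simulate the recursive translation with an explicit stack.

    "visit" entries schedule a subformula for translation; "build"
    entries reassemble a connective once its children are translated.
    """
    work, results = [("visit", phi)], []
    while work:
        tag, node = work.pop()
        if tag == "visit":
            if isinstance(node, str):            # a variable: translate directly
                results.append(t_var(node))
            else:                                # connective: defer reassembly
                work.append(("build", node[0]))
                for child in node[1:]:
                    work.append(("visit", child))
        else:                                    # "build": pop translated children
            arity = 1 if node == "not" else 2
            children = [results.pop() for _ in range(arity)]
            results.append((node, *children))
    return results.pop()
```

Each subformula is pushed and popped a constant number of times, so the stack-based simulation stays polynomial (indeed linear in the number of subformulas, plus the cost of the `t_var` calls).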
Many recursive algorithms perform a fixed number of recursive calls at each step and then combine the results. For example, merge sort calls itself twice on inputs of size $n/2$ and then combines the results in linear time, giving the recurrence $T(n)=2T(n/2)+O(n)$. The master theorem is a general principle that determines the complexity in many such cases; here it gives $T(n)=O(n\log n)$.
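As a concrete instance of the recurrence $T(n)=2T(n/2)+O(n)$, here is a standard merge sort; the two recursive calls and the linear-time merge correspond directly to the two terms of the recurrence.

```python
def merge_sort(a):
    """Sort a list: two recursive calls on halves (2 T(n/2)),
    then a linear-time merge (O(n))."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])  # 2 T(n/2)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):                  # O(n) merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The master theorem (case $a=2$, $b=2$, combine cost $\Theta(n)$) then gives $T(n)=\Theta(n\log n)$.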
Formally, complexity is defined as the number of steps taken by a Turing machine. This requires checking low-level details: are numbers written in binary? How are multiple inputs encoded into one long string? How does the machine parse and navigate around in this string? Are invalid inputs (for example, mismatched parentheses) rejected? And so on. Usually, these issues are not very interesting; once you are convinced they can be handled, you can often skip over them and think in terms of a high-level programming language.