How are weakly universal Turing machines actually defined?


As far as I know, the definition of a universal Turing machine is something along the following lines (of course, details may vary from source to source):

A Turing machine $M$ is called universal if there are computable functions $f:\Bbb N^2\rightarrow\Sigma^*$ and $g:\Sigma^*\rightarrow\Bbb N$ such that for any $e,n\in\Bbb N$, $\varphi_e(n)$ is equal to $g$ applied to the output of $M$ on input $f(e,n)$ (here I use $\Sigma^*$, in a non-standard way, to denote possible tape contents with finitely many non-blank cells).

(Basically, the idea is that we can compute any computable function by encoding the input, letting $M$ run on it, and then decoding the output.)
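To make the encode/run/decode pattern in the definition concrete, here is a minimal sketch in Python. All three components (`f`, `M_run`, `g`) are toy placeholders of my own invention, not any actual universal machine; the point is only the shape of the schema.

```python
# Sketch of the universality schema: f encodes (e, n) into a tape word,
# the universal machine M runs on it, and g decodes the final tape.
# In this toy model we pretend phi_e(n) = e + n.

def f(e, n):
    # hypothetical encoding of (program index, input) as a tape word
    return f"{e},{n}"

def M_run(tape):
    # stand-in for running M on the encoded tape; here it just
    # adds the two encoded numbers and writes the result back
    a, b = tape.split(",")
    return str(int(a) + int(b))

def g(output):
    # hypothetical decoding of the final tape contents into a number
    return int(output)

def simulate(e, n):
    # the pattern from the definition: decode(M(encode(e, n)))
    return g(M_run(f(e, n)))

print(simulate(2, 3))  # 5 in this toy model
```

The essential feature is that $f$ and $g$ do all the translation work once and for all, uniformly in $e$ and $n$.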

However, there are many machines which are only "weakly universal", for example some machines which simulate Wolfram's Rule 110 automaton. The "weakness" here refers to the fact that these machines require one to write some "background" onto the initial tape, usually periodic, but sometimes not (e.g. for the (2,3)-TM).
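For reference, Rule 110 itself is easy to state: each cell's next state is read off from the bits of the number 110, indexed by the 3-cell neighborhood. A minimal sketch (my own, using a finite row padded with 0s; the "weak" constructions instead need an infinite, typically periodic, background, which is exactly what a finite array cannot represent):

```python
def rule110_step(cells):
    """One synchronous update of Rule 110 on a finite row of 0/1 cells.
    Cells beyond the ends are treated as 0; weakly universal
    constructions would instead extend the row with an infinite
    (often periodic) background pattern."""
    padded = [0] + cells + [0]
    # neighborhood (l, c, r) read as a 3-bit index into the bits of 110
    return [(110 >> (padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2])) & 1
            for i in range(len(cells))]

row = [0, 0, 0, 1, 0, 0, 0]
row = rule110_step(row)
print(row)  # [0, 0, 1, 1, 0, 0, 0]
```

Cook's universality proof for Rule 110 relies on gliders moving through such a periodic background, which is why the corresponding Turing machines inherit the "weak" qualifier.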

My question is: what does that actually mean? The community seems to treat these notions as intuitively clear, but I have never managed to find a (semi)formal definition of universality which applies to them. Indeed, most sources that call machines like Wolfram's (2,5)-TM (weakly) universal define universality merely as "able to simulate any Turing machine", whatever that is supposed to mean. So let me make my question specific:

Under what (formal) definition of universality is Wolfram and Cook's (2,5)-TM (I couldn't find a better reference) universal? How about Wolfram's (2,3)-TM?

I suspect this might have been asked before, but I couldn't find anything by searching. (Edit: four years later, I am still unable to locate one!)