I'm curious what the right abstraction is for equipping an arbitrary set with "something kind of like computability".
Topologies don't seem to fit because the complement of an open set is not open in general.
Sigma algebras don't seem to fit because, under our intuitive notion of computability, computable operations on real numbers can't "zero in" on a particular real: we can construct narrower and narrower sets containing the real in question, but we can't isolate the real itself.
Most of the definitions of computability that I've seen involve singling out a subset of $\mathbb{N} \to \mathbb{N}$ or $\mathbb{N}^n \to \mathbb{N}$. One constructs Turing machines and describes how to feed data in and read it back out, or one starts from an initial pool of functions and closes it under certain operators.
There are a couple of properties that apply to functions in $\mathbb{N} \to \mathbb{N}$ that are vaguely computability-like.
- computability itself: corresponding to an always-halting Turing Machine / decider.
- computable in polynomial time.
- capable of being expressed in a particular total functional programming language.
Each of these properties, even though it applies to maps from $\mathbb{N}$ to itself, readily gives us a notion of which subsets of $\mathbb{N}$ are computable, namely the preimages of $0_{\mathbb{N}}$ under the maps with that property.
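To make the "preimage of $0_{\mathbb{N}}$" idea concrete, here is a minimal Python sketch: a total computable function `f` whose zero set is then a decidable subset of $\mathbb{N}$ (the choice of `f`, a primality test, is purely illustrative).

```python
# A total computable function f : N -> N; the set {n : f(n) == 0}
# is then a decidable ("computable") subset of N.
def f(n: int) -> int:
    # Illustrative choice: f(n) == 0 exactly when n is prime.
    if n < 2:
        return 1
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return 1
    return 0

# The decidable set carved out as the preimage of 0:
primes_below_20 = [n for n in range(20) if f(n) == 0]
```

Any of the three properties above (totality, polynomial time, expressibility in a total language) can play the role of the restriction on `f`, each yielding its own class of "computable" subsets.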
However, for certain things like the real numbers $\mathbb{R}$, I can think of two obvious ways to talk about computability, i.e. two candidates for $(\mathbb{R}, \sigma)$ .
1) Restrict our attention to the computable reals. Each of the sets in our collection of subsets $\sigma$ is a subset of the computable reals. Not every subset of the set of computable reals is in $\sigma$, however.
2) Consider "computably distinguishable" reals. Restrict our attention to the interval $[0,1]$, i.e. interpret infinite bitstrings as reals on the unit interval. As our "basis" of sorts, take the sets of numbers whose expansions start with the same finite sequence of 0s and 1s: associate each finite bitstring with the subinterval of $[0,1]$ it corresponds to. By taking reciprocals and using negation, we can map portions of $\mathbb{R}$ back to $[0,1]$. We then say that a subset of $\mathbb{R}$ is computable when applying $\lambda x . -x$ or $\lambda x . 1/x$ and intersecting the result with $[0,1]$ yields a computable subset of $[0,1]$.
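The correspondence between finite bitstrings and subintervals of $[0,1]$ can be sketched in a few lines of Python (exact arithmetic via `Fraction`; the function name is mine, not from the question):

```python
from fractions import Fraction

def interval(bits: str):
    """Map a finite 0/1 string to the dyadic subinterval of [0, 1]
    consisting of all reals whose binary expansion starts with `bits`."""
    lo = Fraction(int(bits, 2) if bits else 0, 2 ** len(bits))
    width = Fraction(1, 2 ** len(bits))
    return (lo, lo + width)

# "01" denotes [1/4, 1/2]: every real of the form 0.01... lies there.
quarter_to_half = interval("01")
```

Longer bitstrings give narrower intervals, which is exactly the "narrower and narrower sets containing the real, without isolating it" phenomenon from above.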
Is there a kind of additional structure that we can endow a set with to "fix" our notion of computability up front?
The general approach for getting a computability theory, ultimately founded on Turing machines, to work for many/most structures of interest is the theory of represented spaces, on which e.g. computable analysis is built.
By considering e.g. Turing functionals, or Type-2 Turing machines (which do not halt, but instead keep writing more and more on a write-once output tape), we know when a partial function $F : \subseteq 2^\mathbb{N} \to 2^\mathbb{N}$ is computable. We define a represented space to be a pair $(X,\delta)$ of a set $X$ and a partial surjection $\delta : \subseteq 2^\mathbb{N} \to X$. The idea is that we use the elements of $\operatorname{dom}(\delta)$ as names denoting the objects in $X$. An element of $X$ is computable (as a point) iff it has a computable name.
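A Type-2-style computation can be mimicked in Python with lazy generators: each output symbol is committed after inspecting only finitely many input symbols, and output is append-only, like a write-once tape (this is my illustrative sketch, not a formal model).

```python
from itertools import islice

# A Type-2-style computation on infinite bit streams: each output bit
# is produced after reading only finitely many input bits, and the
# output is write-once (we only ever append, never revise).
def negate(bits):
    for b in bits:       # read one more input symbol...
        yield 1 - b      # ...then commit one more output symbol

def constant_zero():
    """The name 000... as an infinite stream."""
    while True:
        yield 0

# First 5 output bits of the functional applied to the name 000...
first_five = list(islice(negate(constant_zero()), 5))  # [1, 1, 1, 1, 1]
```

The crucial feature is that the computation never needs the whole infinite input at once, which is what makes partial functions on $2^\mathbb{N}$ tractable.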
A computable function between represented spaces $(X,\delta_X)$, $(Y,\delta_Y)$ is a function $f : X \to Y$ such that there exists a computable partial function $F : \subseteq 2^\mathbb{N} \to 2^\mathbb{N}$ with $f(\delta_X(p)) = \delta_Y(F(p))$ for all $p \in \operatorname{dom}(\delta_X)$.
Of course, we need to figure out what representations are suitable to describe objects we already know, such as $\mathbb{R}$. A "famous" mistake here was made by Turing, who originally proposed to use the binary expansion of reals to construct a representation. But this is very unsatisfactory: it would make multiplication by 3 non-computable. Instead, a good representation of $\mathbb{R}$ can be obtained by coding sequences of rationals converging with speed $2^{-t}$. This representation makes every "natural" continuous function computable.
It turns out that topology actually is closely linked to computability: open sets correspond to semi-decidable properties (i.e. we can confirm that they hold, but not necessarily refute them when they fail), and continuous functions correspond to computable functions.
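A small sketch of semi-decidability for the open set $(0,\infty)$, again using fast Cauchy names (my illustration; the step cap exists only so the demo terminates):

```python
from fractions import Fraction

# Semi-decide membership of x in the open set (0, oo), given a fast
# Cauchy name q with |q(n) - x| <= 2^(-n): if x > 0 we eventually
# confirm it; if x <= 0, the search runs forever (here: capped).
def semi_decide_positive(q, max_steps=100):
    for n in range(max_steps):
        if q(n) > Fraction(1, 2 ** n):  # then x >= q(n) - 2^(-n) > 0
            return True                 # confirmed: x lies in (0, oo)
    return None                         # no verdict yet -- not "False"!

confirmed = semi_decide_positive(lambda n: Fraction(1, 2))  # name for 1/2
```

This asymmetry (confirmation without refutation) is precisely why open sets, rather than, say, the clopen or measurable sets, are the topological counterpart of semi-decidable properties.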
For further reading, I refer to the textbook *Computable Analysis* by Klaus Weihrauch, my article *On the topological aspects of the theory of represented spaces* (doi / arXiv), and the very recent *Handbook of Computability and Complexity in Analysis* (Springer).