In normal computers, pointers in code "point" to memory. At a lower level, linking and loading turn those "pointers" into numerical addresses that really do point to specific bytes (or bits) of memory. Those addresses, like everything else in memory, are stored in binary, that is, in base 2 (0s and 1s).
In a quirky math class called Math 17 offered by my college, I learned that in the complex plane, knowing a function's behavior on the edge of a disk allows you to predict its behavior anywhere inside the disk. I'm not strong on the details, but I believe the idea is roughly this: a function that is holomorphic on an open set is completely determined by its values on the boundary of that set, which lets you locate its zeros inside the set and thus predict its behavior there. I'm sure this explanation is inadequate and grates on the ears of real mathematicians, but hopefully you know what I'm talking about.
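(For reference, I believe the result being half-remembered here is the Cauchy integral formula: if $f$ is holomorphic on a neighborhood of the closed unit disk and $z_0$ lies inside it, then

$$f(z_0) = \frac{1}{2\pi i} \oint_{|z|=1} \frac{f(z)}{z - z_0}\, dz,$$

so the values of $f$ on the boundary circle determine every value in the interior.)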
Drawing on my extremely tenuous understanding of the rudiments of topology: might it be possible to allocate memory addresses in a computer such that they could be modeled by some sort of disk in the complex plane? Then one could keep only the information pertaining to the edge of the set containing the data, discard the rest, and use the mathematics above to reconstruct the memory modeled inside the disk. The computer would only have to hold onto memory long enough to build it into such a surface, after which it could forget everything except the boundary.
Of course it would take a lot of time to build this hypothetical complex disk. But once built, I imagine such a model could reduce the memory required by an order of magnitude or more.
You might say that an analytic function's values on the boundary of a disk "encode" its values in the interior. There is a lot of redundancy in describing an analytic function by giving its values at every point (an uncountable collection of values!). An even more "economical" way to describe an analytic function is by a Taylor series: that's a countable collection of values.
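To make this concrete, here is a small numerical sketch (my own illustration, not part of the original answer) of the boundary-determines-interior fact: the Cauchy integral formula recovers an interior value of an analytic function purely from samples on the boundary circle. The function `cmath.exp` and the point `z0` are arbitrary choices; the integral is approximated by the trapezoidal rule, which converges very fast for periodic integrands.

```python
import cmath
import math

def boundary_reconstruct(f, z0, n_samples=1000):
    """Recover f(z0) inside the unit disk from f's values on the unit circle,
    via the Cauchy integral formula f(z0) = (1/2πi) ∮ f(z)/(z - z0) dz."""
    total = 0j
    for k in range(n_samples):
        theta = 2 * math.pi * k / n_samples
        z = cmath.exp(1j * theta)                  # sample point on the unit circle
        dz = 1j * z * (2 * math.pi / n_samples)    # dz = i e^{iθ} dθ
        total += f(z) / (z - z0) * dz
    return total / (2j * math.pi)

z0 = 0.3 + 0.2j
approx = boundary_reconstruct(cmath.exp, z0)
print(abs(approx - cmath.exp(z0)))  # tiny: boundary values determine the interior
```

Note that this is not compression in any useful sense: we traded one complex value for a thousand boundary samples. The "economy" only appears for the idealized, uncountable description of the function.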
A computer memory is a very different situation: a finite set of discrete values (0s and 1s). Although in particular situations there might be an encoding that preserves all the information and takes fewer bits, that certainly won't happen in general. With $n$ bits that can each independently take the values $0$ and $1$, there are $2^n$ possible configurations, and there is no way to encode all of them in fewer than $n$ bits.
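The counting argument can be sketched in a few lines of code (my own illustration; the "drop the last bit" encoder is just a stand-in for any hypothetical scheme that uses fewer than $n$ bits). By pigeonhole, $2^n$ states cannot map injectively into $2^{n-1}$ codes, so some two distinct memory states must share a code, and no decoder can recover both.

```python
from itertools import product

n = 4
states = list(product((0, 1), repeat=n))   # all 2^n memory configurations
assert len(states) == 2 ** n

def encode(state):
    return state[:-1]                      # hypothetical (n-1)-bit "compressor"

# Scan for two distinct states that encode to the same code word.
seen = {}
collision = None
for s in states:
    code = encode(s)
    if code in seen:
        collision = (seen[code], s)        # two states, one code: information lost
        break
    seen[code] = s

print(collision)  # a pair of distinct states sharing one code
```

Any encoder with fewer output bits than input bits will fail the same way; which pair collides depends on the scheme, but some pair always must.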