How can we write abstract algorithms?


Writing pseudo-code for algorithms is common practice in the applied mathematics literature. It is also often the case that the ideal input of an algorithm is an infinite set; for example, it could be a positive-dimensional algebraic variety over $\mathbb{C}$. However, practical algorithms only take as input a finite sample of this infinite ideal set. As another example, image processing algorithms take as input the pixels of a digital image, even though the ideal input is a continuous object, i.e., the actual physical scene that the digital camera is attempting to capture.

So, in order to understand the theoretical properties of such algorithms, it might be of interest to develop abstract versions of them, in the sense that their inputs and outputs are theoretical objects, not necessarily subject to computation. These algorithms might also require operations that are not themselves computable. For example, we might have statements of the form "while $A \neq B$ do $A \gets A \cap B$", where both $A$ and $B$ are infinite sets. Notice that in general, given $A$ and $B$, it is not possible to check the validity of the condition $A \neq B$. In fact, one cannot even receive $A$ or $B$ as input in the first place, because they are infinite!
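One common workaround is worth illustrating: the loop above becomes executable once the infinite sets are replaced by finite *descriptions*. The sketch below (a hypothetical illustration, not from any particular paper) represents each "infinite set" as a closed real interval given by its endpoints, so both the intersection and the equality test operate on the finite descriptions rather than on the sets themselves.

```python
def intersect(a, b):
    """Intersection of closed intervals (lo, hi); None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Two infinite sets of reals, each described by two numbers.
A, B = (0.0, 5.0), (1.0, 3.0)

# The abstract loop "while A != B do A <- A ∩ B", now decidable
# because it compares finite descriptions, not infinite sets.
while A is not None and A != B:
    A = intersect(A, B)

print(A)  # (1.0, 3.0)
```

Note that the loop terminates here only because $B \subseteq A$; with incomparable intervals the iteration stabilizes at $A \cap B \neq B$ and the abstract loop would run forever, which is exactly the kind of behavior the question is about.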

So my question is: is it scientifically correct to write abstract algorithms in the above sense? Have such algorithms appeared in the literature? If not, then how can one abstract an algorithmic process that operates mathematically on abstract, non-tangible objects?


BEST ANSWER

One way to write down algorithms that work with infinite or non-computable objects is to assume that you have some kind of "oracle" that can perform the required operation. For your example, you could assume a "magic" oracle function that decides whether $A \neq B$ and can be invoked by your algorithm.

For more details, see, e.g., http://en.wikipedia.org/wiki/Oracle_machine.
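In code, an oracle can be modeled as a function passed in as a parameter, so the algorithm itself never has to decide set equality. A minimal sketch (the names `equal_oracle` and `intersect_oracle` are illustrative, not standard):

```python
def refine(A, B, equal_oracle, intersect_oracle):
    """Repeatedly replace A by A ∩ B until the oracle says A = B.

    The two oracles are black boxes: the algorithm is well defined even
    when no computable implementation of them exists.
    """
    while not equal_oracle(A, B):
        A = intersect_oracle(A, B)
    return A

# For *finite* sets the oracles happen to be ordinary computable functions:
result = refine({1, 2, 3, 4}, {2, 3},
                lambda a, b: a == b,   # equality test
                lambda a, b: a & b)    # intersection
print(result)  # {2, 3}
```

The same `refine` function expresses the abstract algorithm; only the choice of oracles determines whether it is actually computable.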

ANSWER

Newton's method finds a root of an equation in terms of abstract computation, but the initial condition might not be exactly representable as a floating-point number, and neither might the subsequent approximations. So this might be an example of interest to you. Newton's method can fail when the numeric precision of the arithmetic is not arbitrarily large, yet with exact arithmetic it works, under mild assumptions about the function and the starting guess.
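The gap between the abstract iteration and its floating-point realization can be made concrete by running Newton's method in exact rational arithmetic. A hedged sketch for $f(x) = x^2 - 2$: every iterate is an exact `Fraction`, so no rounding ever occurs, and the iterates converge to $\sqrt{2}$ without ever reaching it.

```python
from fractions import Fraction

def newton_step(x):
    # x - f(x)/f'(x) for f(x) = x^2 - 2
    return x - (x * x - 2) / (2 * x)

x = Fraction(2)
for _ in range(4):
    x = newton_step(x)

print(x)              # 665857/470832
print(float(x) ** 2)  # 2.000000000000004...
```

Here the abstract algorithm (over the rationals) is exactly realizable, but its limit object, the root itself, is not; the irrationality of $\sqrt{2}$ shows up as non-termination of the exact iteration.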

ANSWER

The analysis of algorithms began over 2,000 years ago with exactly the kind of abstract algorithm you describe. Here is Proposition X.2 of Euclid's Elements (with acknowledgments to the translation at http://farside.ph.utexas.edu/euclid/Elements.pdf):

If the remainder of two unequal magnitudes never measures the magnitude before it, when the lesser magnitude is continually subtracted in turn from the greater, then the original magnitudes will be incommensurable.

In modern terminology, the "magnitudes" are positive real numbers. What Euclid is describing is a process whereby you take two positive real numbers, say $A$ and $B$, and repeatedly subtract the smaller from the larger until they become equal. The conclusion is that $A$ and $B$ are incommensurable, i.e., $A/B$ is irrational, if this process never terminates. The termination test ($A \neq B$) for this algorithm is not computable.
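Euclid's subtraction process can be sketched directly. Run on exact rationals it terminates, consistent with the proposition: the loop fails to terminate exactly when $A/B$ is irrational.

```python
from fractions import Fraction

def euclid_subtract(A, B):
    """Euclid's repeated subtraction on two positive magnitudes.

    Terminates iff A/B is rational; the returned value is their
    greatest common measure.
    """
    while A != B:
        if A > B:
            A = A - B
        else:
            B = B - A
    return A

m = euclid_subtract(Fraction(9, 4), Fraction(3, 2))
print(m)  # 3/4
```

With `Fraction` inputs the equality test $A \neq B$ is decidable, so the abstract algorithm becomes a real one; with arbitrary real "magnitudes" it is Euclid's abstract algorithm, termination test and all.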