Complex floating point types


I'm implementing my own programming language for fun, and have been playing around with the implementation of various floating point numeric types. I'd like to think about implementing a numeric type to represent complex numbers.

i) I've been reading around, and my understanding is that there is no equivalent of the IEEE-754 standard (which defines standard floating-point representations of real numbers) for complex numbers. Is that correct, or have I missed something?

ii) All programming languages that I'm aware of implement complex by storing a pair of (either single- or double-precision) floating-point numbers, which represent the Cartesian form of a complex number $x + iy$. Is anyone aware of a programming language that does things differently?

I'd like to consider implementing a numeric type that represents complex numbers in polar form $re^{i\theta}$, storing $r$ as an unsigned floating-point number and $\theta$ as an unsigned fixed-point number. This would be quite a non-standard thing to do (but remember I'm just doing my programming language for fun). As far as I'm aware, there are very few settings that use unsigned floating-point numbers, but it feels like this would be an appropriate use for them. I'd want $\theta$ to be stored as fixed-point in order to give an even spread around the origin.
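To make the idea concrete, here is a minimal sketch of such a type (all names are hypothetical): $r$ is kept as a non-negative float, and $\theta$ is quantised to an unsigned 32-bit fixed-point fraction of a full turn, so representable angles are spread perfectly evenly around the origin and multiplication reduces to wrap-around integer addition of angles.

```python
import math

# Hypothetical sketch: angle stored as an unsigned fixed-point fraction
# of a full turn, here with 32 bits of resolution.
ANGLE_BITS = 32
ANGLE_SCALE = 1 << ANGLE_BITS  # 2**32 angle steps per full turn

class PolarComplex:
    def __init__(self, r, theta):
        # r models an "unsigned" (non-negative) float; theta is reduced
        # mod 2*pi and snapped to the fixed-point angle grid.
        if r < 0:
            raise ValueError("r must be non-negative")
        self.r = float(r)
        self.angle = round(((theta / (2 * math.pi)) % 1.0) * ANGLE_SCALE) % ANGLE_SCALE

    @property
    def theta(self):
        return 2 * math.pi * self.angle / ANGLE_SCALE

    def __mul__(self, other):
        # Multiplication is cheap in this form: multiply moduli and add
        # angles modulo a full turn (plain integer wrap-around).
        out = PolarComplex(0.0, 0.0)
        out.r = self.r * other.r
        out.angle = (self.angle + other.angle) % ANGLE_SCALE
        return out

    def to_complex(self):
        # Convert to Cartesian form for interoperability / printing.
        return self.r * complex(math.cos(self.theta), math.sin(self.theta))
```

Note that with this layout, angle addition can never leave the representable range, which sidesteps the argument-reduction problem that plagues floating-point angles.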

Is anyone aware of any work that has been done on the numerical consequences of such a choice? For example: what is the relative importance, in a typical mathematical workflow, of addition vs multiplication of complex numbers (addition being easier in Cartesian form, multiplication being easier in polar form)? Are there any references that anyone could point me to please?

NB - please note that I'm not interested in the existence of any hardware support for these numeric types. Since I'm doing my language for fun, I'm happy to do things in software as a learning exercise.

Finally - does anyone have any suggestions about how a complex equivalent of Inf should behave?

There are 3 answers below.

Accepted answer

Regarding complex version of IEEE-754

It is true that no complex version of IEEE-754 exists, but the same design rules as for IEEE-754 can be applied to complex arithmetic. However, this is more subtle than it first appears.

For example, the direct Cartesian implementation of complex multiplication (i.e. $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$) does not work with $\text{inf}$. Consider for example $(\text{inf} + 0i)(\text{inf} + 0i)$. The correct result would obviously be $(\text{inf} + 0i)$, but the direct floating-point evaluation gives $(\text{inf} + \text{NaN}\,i)$, because the imaginary part evaluates to $\text{inf} \cdot 0 + 0 \cdot \text{inf} = \text{NaN} + \text{NaN} = \text{NaN}$, which is not correct. So the implementation needs to take care of these corner cases to always generate a correct result.
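As an illustration, here is a sketch of one possible recovery step, loosely modelled on the approach C99 Annex G takes for `_Complex` multiplication but adjusted (as an assumption of this sketch) so that a component whose recomputed sign pattern is exactly zero stays zero, matching the $(\text{inf} + 0i)$ result expected above. The helper names are hypothetical.

```python
import math

def naive_mul(a, b, c, d):
    # Textbook Cartesian product: (a+ib)(c+id) = (ac - bd) + i(ad + bc)
    return a * c - b * d, a * d + b * c

def box(x):
    # Collapse a component to +/-1 if infinite and +/-0 otherwise (NaN
    # becomes +0); used only to decide the sign pattern of the result.
    if math.isnan(x):
        return 0.0
    return math.copysign(1.0 if math.isinf(x) else 0.0, x)

def guarded_mul(a, b, c, d):
    # If the naive product produced a NaN component even though one
    # operand was infinite, recompute the sign pattern from boxed
    # operands and rebuild the result so the infinity survives.
    re, im = naive_mul(a, b, c, d)
    if (math.isnan(re) or math.isnan(im)) and any(
            math.isinf(v) for v in (a, b, c, d)):
        ba, bb, bc2, bd = box(a), box(b), box(c), box(d)
        re_sign = ba * bc2 - bb * bd
        im_sign = ba * bd + bb * bc2
        re = math.inf * re_sign if re_sign != 0 else math.copysign(0.0, re_sign)
        im = math.inf * im_sign if im_sign != 0 else math.copysign(0.0, im_sign)
    return re, im
```

With this guard, $(\text{inf} + 0i)(\text{inf} + 0i)$ yields $(\text{inf} + 0i)$ rather than $(\text{inf} + \text{NaN}\,i)$.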

Furthermore, for special functions it is not always even obvious what the correct behavior with respect to +0, -0, +inf, -inf and NaN should be. And in complex arithmetic you additionally have combinations such as +0 + inf·i or +inf + NaN·i across the real and imaginary parts, and so on.

Some papers on this topic may be useful (Kahan is a very authoritative source in this area).

You could of course decide not to have inf and/or NaN. But then your arithmetic is not algebraically closed, which means not all operations generate a result, and you will need some sort of exception handling instead.

Regarding the implementation

You are completely free to choose how to implement complex numbers, whether as a real and imaginary pair or in polar form. Once the behavior of the complex arithmetic is defined (especially with respect to the special float and complex-float values discussed above), the implementation just has to give the correct results.

There are considerations such as speed, but you know that already. Polar form is usually not used because addition is quite expensive, and I believe there are also accuracy issues. However, if that is not a problem and you only need to multiply and divide, then I would say polar form is the way to go.
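The cost asymmetry is easy to see in a sketch (function names are hypothetical, angles here are plain floats for simplicity): polar multiplication is one multiply and one angle addition, while polar addition has no cheap closed form and typically round-trips through Cartesian coordinates, costing four trig calls plus a `hypot` and an `atan2`.

```python
import math

def polar_mul(r1, t1, r2, t2):
    # Cheap: multiply the moduli, add the angles modulo a full turn.
    return r1 * r2, (t1 + t2) % (2 * math.pi)

def polar_add(r1, t1, r2, t2):
    # Expensive: convert both operands to Cartesian, add componentwise,
    # then convert the sum back to polar form.
    x = r1 * math.cos(t1) + r2 * math.cos(t2)
    y = r1 * math.sin(t1) + r2 * math.sin(t2)
    return math.hypot(x, y), math.atan2(y, x) % (2 * math.pi)
```

Each trig call also introduces rounding error, which is one source of the accuracy issues mentioned above.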

Answer

I have no references, but I'm actually coding in Julia these days, and the profiling tool lets me dig into the computational time down to the functions Base.+() and Base.*(). For the computations I do (and this is not a generality), there are far more additions than multiplications by call count, but far more time is spent in multiplications than in additions.

Furthermore, even if you find empirical studies counting the additions and multiplications in a given running code, it is quite unlikely that the setup deals with complex numbers...

To sum up, if you have applications in mind, Julia's profiler is a good way of counting the calls to the additions and multiplications of your algorithm.

Answer

A complex Inf has no need for a sign, but arithmetic should work similarly: Inf plus anything should be Inf, and Inf multiplied by anything should be Inf, unless it is multiplied by 0, in which case the result should be 0. A good resource for checking how various programming languages handle infinity is here: https://rosettacode.org/wiki/Extreme_floating_point_values.
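The rules above can be sketched as a small hypothetical type: one unsigned complex infinity (as on the Riemann sphere), absorbing under addition and multiplication except against zero. Note that the `Inf * 0 = 0` rule is this answer's suggestion; the usual Riemann-sphere convention instead leaves `0 * Inf` undefined.

```python
class ComplexInf:
    # A single, unsigned complex infinity, following the rules proposed
    # in this answer (hypothetical sketch, not a standard).
    def __add__(self, other):
        return self  # Inf plus anything is Inf
    __radd__ = __add__

    def __mul__(self, other):
        # Inf times anything is Inf, except Inf * 0 = 0 per this
        # answer's suggestion.
        if other == 0:
            return 0
        return self
    __rmul__ = __mul__

INF_C = ComplexInf()
```

In a real implementation you would also have to decide what `Inf - Inf` and `Inf / Inf` mean (presumably NaN), which this sketch leaves out.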

I've personally always found it easier to perform addition and subtraction in Cartesian form and multiplication and division in polar form. Addition and subtraction require some trigonometry in polar form, though in Cartesian form it is not that difficult to implement multiplication as $(a+ib)(c+id)=(ac-bd)+i(ad+bc)$. Division gets more costly in Cartesian form, and converting to polar may be more efficient. You are right that most languages store complex numbers in Cartesian form and that polar is also an option, but there is a third way that may be new to you, using matrices for both addition and multiplication: https://math.libretexts.org/Bookshelves/Analysis/Book%3A_Complex_Variables_with_Applications_(Orloff)/01%3A_Complex_Algebra_and_the_Complex_Plane/1.14%3A_Representing_Complex_Multiplication_as_Matrix_Multiplication. Maybe you'll find something fun to use this with.
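For reference, the matrix representation linked above stores $a + ib$ as the real $2\times 2$ matrix $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$, and ordinary matrix multiplication then realises complex multiplication (helper names here are hypothetical):

```python
def to_matrix(a, b):
    # Represent a + ib as the 2x2 real matrix [[a, -b], [b, a]].
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    # Plain 2x2 matrix multiplication.
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# (1 + 2i)(3 + 4i) = -5 + 10i, i.e. the matrix for -5 + 10i:
prod = mat_mul(to_matrix(1, 2), to_matrix(3, 4))
```

This costs four multiplications and two additions per product if you exploit the structure (the matrix has only two independent entries), so it is not cheaper than the direct formula, but it makes the rotation-and-scaling geometry of complex multiplication explicit.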

Unfortunately I've also been unable to find a complex number standard or references about the speed of various methods.