Software parallelization from an algebraic point of view


I am a programmer with a strong interest in mathematics (more so since I read Elements of Programming and From Mathematics to Generic Programming, by Alexander Stepanov), and as such I have been learning a little abstract algebra and wondering about its applications to software design.

To my knowledge, there are languages like Haskell and Erlang that put a huge emphasis on algebra and exploit it for parallelization (for instance, through monoids).

For example, in a toy program I am developing for fun that runs a Miller-Rabin test on integers, the obvious parallelization is to run each round of the test in parallel, then reduce the boolean results with AND. This is parallelizable because the predicate simple_miller_rabin : Integer -> Bool maps integers to booleans, and the structure $(\mathbb{B},\wedge)$ is a monoid (with identity $\top$), so the fold can be computed in any grouping. Hence I can "prove" that my code is parallelizable because I have found a monoid.
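To make this concrete, here is a minimal Python sketch of that map-then-fold shape (the function names and the choice of witness bases are my own illustration, not the original toy program): each Miller-Rabin round is independent, so the rounds can be mapped in parallel and the boolean results folded with the monoid $(\mathbb{B},\wedge)$, whose identity is True.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial, reduce

def mr_round(n: int, a: int) -> bool:
    """One Miller-Rabin round: True if odd n > 2 passes witness a."""
    if a % n == 0:
        return True  # a is not a usable witness for this n
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_probable_prime(n: int, bases=(2, 3, 5, 7, 11)) -> bool:
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # map: run each round independently (here with threads, but the
    # associativity of AND means any parallel/grouped evaluation works)
    with ThreadPoolExecutor() as pool:
        results = pool.map(partial(mr_round, n), bases)
    # reduce: fold the Bools with the monoid (B, AND); True is the identity
    return reduce(lambda p, q: p and q, results, True)
```

Because $\wedge$ is associative and has an identity, the reduce step is insensitive to how the results are grouped or ordered, which is exactly the algebraic fact licensing the parallel evaluation.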

I am interested in exploring this topic more deeply, and I would like to ask whether there are papers or books I could read on it. Specifically, I want to learn about the application of algebra to the analysis of code and to finding parts that are parallelizable and why (the "why" here being "because this code is isomorphic to algebraic structure X", including which structures are parallelizable and so on). I hope I am making sense at all...

Note: I am not a computer scientist (as a matter of fact I majored in Mechanical Engineering, which has a strong emphasis on Analysis, not Algebra); I just happened to end up working in the tech industry and want to become a very formal programmer. So maybe this is something pretty basic that every CS major learns.