Both the "square and multiply" and the "double and add" algorithms are optimizations for what can be naively computed using a series of binary operations, on some initial value $p$.
In the case of "square and multiply", it is an optimization for raising any number $p$ to a natural number power $n$, giving us $p^n$. The result $p^n$ can naively be computed by multiplying by $p$ a total of $n - 1$ times.
So:
$$ p\cdot p = r_1\\ r_1 \cdot p = r_2\\ r_2 \cdot p = r_3\\ \vdots\\ r_{n-2} \cdot p = r_{n-1} $$
But, in the event that $n = 2^m$ ($m$ being a natural number), we can easily optimize the above to require only $m$ multiplications instead of $2^m - 1$, by repeatedly squaring the previous result.
So:
$$ p \cdot p = p^2\\ p^2 \cdot p^2 = p^4\\ p^4 \cdot p^4 = p^8\\ \vdots\\ p^{2^{m-1}} \cdot p^{2^{m-1}} = p^{2^m} $$
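The repeated-squaring idea above can be sketched in a few lines of Python (the function name is my own, just for illustration):

```python
def pow_power_of_two(p, m):
    """Compute p**(2**m) using m squarings instead of 2**m - 1 multiplications."""
    result = p
    for _ in range(m):
        result = result * result  # each squaring doubles the exponent
    return result
```

For example, `pow_power_of_two(3, 4)` performs only 4 multiplications to reach $3^{16}$.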
But in the event that $n$ is not a power of $2$, we employ the square and multiply algorithm. That is, we take the binary representation of $n$ and apply the following algorithm:
- read the digits from left to right
- for the leading "1", just take $p$
- for each subsequent digit: on a "1", square the current result and multiply by $p$; on a "0", just square
For example, let's say we wanted to compute $p^{11}$. The binary representation of $11$ is $1011$. So, applying the algorithm, we get:
| Digit | Operation | Result |
|---|---|---|
| $1$ | $p$ | $p$ |
| $0$ | $p\cdot p$ | $p^2$ |
| $1$ | $p^2 \cdot p^2 \cdot p$ | $p^5$ |
| $1$ | $p^5 \cdot p^5 \cdot p$ | $p^{11}$ |
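The steps in the table above can be sketched as a short Python function (starting from $1$ and squaring on the leading "1" is equivalent to "just take $p$"):

```python
def square_and_multiply(p, n):
    """Raise p to the natural-number power n, scanning n's bits left to right."""
    result = 1
    for bit in bin(n)[2:]:        # binary digits, most significant first
        result = result * result  # square on every digit
        if bit == '1':
            result = result * p   # additionally multiply by p on a "1"
    return result
```

Calling `square_and_multiply(2, 11)` reproduces the table's steps and returns $2^{11} = 2048$.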
We can do a similar thing with the "double and add" algorithm, using elliptic curve point addition, where the $+$ symbol denotes the group's binary operation. Say we want to add a curve point $P$ to itself $n - 1$ times, obtaining $nP$. The double and add algorithm comes into play. Let $n$ be $11$, as above.
| Digit | Operation | Result |
|---|---|---|
| $1$ | $P$ | $P$ |
| $0$ | $P + P$ | $2P$ |
| $1$ | $2P + 2P + P$ | $5P$ |
| $1$ | $5P + 5P + P$ | $11P$ |
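The same bit-scanning pattern works for any group addition passed in as a function. Here is a minimal sketch, with `add` standing in for elliptic curve point addition (integer addition is used below only to check the arithmetic; `None` is a stand-in for the identity element):

```python
def double_and_add(P, n, add):
    """Compute nP by scanning n's bits, where `add` is the group operation
    (for elliptic curves, point addition)."""
    result = None                        # None represents the identity
    for bit in bin(n)[2:]:
        if result is not None:
            result = add(result, result)                      # double
        if bit == '1':
            result = P if result is None else add(result, P)  # add P
    return result
```

With ordinary integer addition, `double_and_add(7, 11, lambda a, b: a + b)` yields $77 = 11 \cdot 7$, matching the table.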
Here's my question.
I have noticed the same pattern with multiplication. Take any complex number $p$ and multiply it by a natural number $n$: naively, we can compute it as $p + p + \dots + p = np$ ($n$ terms). Here too we can employ operations similar to double and add (although, of course, just multiplying via the distributive property is much faster).
Which gets me thinking: suppose we have an operation $f:S \times S \rightarrow S$, where $S$ is any set you want, and we want to combine an element $s \in S$ with itself many times, e.g. $f(f(f(\dots f(s,s) \dots, s), s), s)$. Can this be generalized to also employ something similar to the "double and add" and "square and multiply" algorithms?
Let's write $f(s_1, s_2)$ as the infix operation $s_1 \bigcirc s_2$, and let $ms$ denote $s$ combined with itself under $\bigcirc$ a total of $m-1$ times.
The table below shows the steps needed to compute $11s$.
| Digit | Operation | Result |
|---|---|---|
| $1$ | $s$ | $s$ |
| $0$ | $s \bigcirc s$ | $2s$ |
| $1$ | $2s \bigcirc 2s \bigcirc s$ | $5s$ |
| $1$ | $5s \bigcirc 5s \bigcirc s$ | $11s$ |
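For concreteness, here is a generic sketch of the pattern with $f$ as a plain Python callable. Note that this sketch implicitly assumes the repeated application is well defined regardless of how the calls are grouped, i.e. it relies on associativity of $f$; that assumption is exactly what the question below is probing.

```python
def repeated_apply(f, s, m):
    """Compute ms, i.e. s combined with itself m - 1 times under f,
    using the double-and-add / square-and-multiply bit-scanning pattern."""
    assert m >= 1
    result = None                       # None marks "nothing accumulated yet"
    for bit in bin(m)[2:]:
        if result is not None:
            result = f(result, result)                      # "double"
        if bit == '1':
            result = s if result is None else f(result, s)  # "add"
    return result
```

With $f$ as addition this computes $m \cdot s$, and with $f$ as multiplication it computes $s^m$, matching the two earlier tables.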
Will there ever be an operation $f:S \times S \rightarrow S$ where the "double and add" style algorithm is not going to work?