Fastest way to perform this multiplication expansion?


Consider a product chain:

$$(a_1 + x)(a_2 + x)(a_3 + x)\cdots(a_n + x)$$

where $x$ is an indeterminate and all $a_i$ are known positive integers.

Is there an efficient way to expand this?

There are 3 answers below.

---

The most concise way to express the expansion is with elementary symmetric polynomials. For each integer $k\in\{0,\ldots,n\}$ the $k$-th elementary symmetric polynomial in $a_1,\ldots,a_n$ is defined as $$e_{n,k}:=\sum_{1\leq j_1<\ldots<j_k\leq n}a_{j_1}\cdots a_{j_k}.$$ In words, $e_{n,k}$ is the sum of all products of $k$ distinct terms chosen from $a_1,\ldots,a_n$, with $e_{n,0}=1$ (the empty product). For example, if $n=4$ then \begin{eqnarray*} e_{4,1}&=&a_1+a_2+a_3+a_4,\\ e_{4,2}&=&a_1a_2+a_1a_3+a_1a_4+a_2a_3+a_2a_4+a_3a_4,\\ e_{4,3}&=&a_1a_2a_3+a_1a_2a_4+a_1a_3a_4+a_2a_3a_4,\\ e_{4,4}&=&a_1a_2a_3a_4. \end{eqnarray*} The expansion of the product chain is then $$\prod_{i=1}^n(a_i+x)=\sum_{k=0}^ne_{n,k}x^{n-k}=e_{n,0}x^n+e_{n,1}x^{n-1}+\ldots+e_{n,n-1}x+e_{n,n}.$$
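A direct way to compute all of these coefficients at once is to multiply in one factor at a time: appending $a_n$ updates every coefficient via the recurrence $e_{n,k}=e_{n-1,k}+a_n\,e_{n-1,k-1}$. A minimal Python sketch (the function name is mine), with coefficients stored highest degree first so that the result reads off as $e_{n,0},e_{n,1},\ldots,e_{n,n}$:

```python
def expand(a):
    """Coefficients of prod_i (a_i + x), highest degree first.

    The k-th entry of the result is the elementary symmetric
    polynomial e_{n,k} evaluated at the given a_i.
    """
    coeffs = [1]  # the constant polynomial 1 (empty product)
    for ai in a:
        # Multiply the current polynomial by (x + ai):
        # new[k] = old[k] + ai * old[k-1].
        coeffs = (
            [1]
            + [coeffs[j] + ai * coeffs[j - 1] for j in range(1, len(coeffs))]
            + [ai * coeffs[-1]]
        )
    return coeffs
```

For instance, `expand([1, 2, 3, 4])` returns `[1, 10, 35, 50, 24]`, matching $(x+1)(x+2)(x+3)(x+4)=x^4+10x^3+35x^2+50x+24$. This straightforward loop costs $O(n^2)$ coefficient operations; the product-tree answer below this one describes how to do better asymptotically.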

---

The constant term is easy to compute: it is just the product of the $a_i$. The coefficient of $x^{n-1}$ is $\sum_i a_i$, so that is also easy.
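As a quick illustration of these two easy coefficients in Python (the sample values are arbitrary):

```python
import math

a = [3, 1, 4, 1, 5]          # the known a_i
constant_term = math.prod(a)  # coefficient of x^0: product of all a_i
subleading = sum(a)           # coefficient of x^{n-1}: sum of all a_i
```

Here `constant_term` is `60` and `subleading` is `14`.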

Assume for a moment that $n = 2k$ is even; then the coefficient of $x^k$ is the sum of all products of $k$ distinct $a_i$. There are $\binom{n}{k} \approx 2^n/\sqrt{n}$ such products, and there is no obvious way to simplify their sum in the general case.

So I suspect that in the general case of large $n$ there is no efficient way to obtain the expansion.

---

You might consider the Borodin–Moenck scheme for computing such a product (see page 372 in "Fast Modular Transforms"). The key idea is to replace $n-1$ successive multiplications of a degree-1 polynomial by a degree-$i$ polynomial with $n-1$ "balanced" multiplications of polynomials of equal degree (or almost equal, if $n$ is not a power of two). That is, in the first stage compute the products $$ f_{1,2} = (x+a_1)(x+a_2),\; f_{3,4} =(x+a_3)(x+a_4), \dots, f_{n-1,n} = (x+a_{n-1})(x+a_n); $$ in the second stage compute $$ f_{1,4} = f_{1,2} \cdot f_{3,4}, \dots, f_{n-3,n} = f_{n-3,n-2} \cdot f_{n-1,n}; $$ and so on, until you finally compute $$ f_{1,n} = f_{1,n/2} \cdot f_{n/2+1,n}. $$ Using fast polynomial multiplication (with pseudo-linear arithmetic complexity), each level costs $\tilde O(n)$ arithmetic operations, where $\tilde O(\cdot)$ hides poly-logarithmic factors. Because the depth of the multiplication tree is $\log n$, the overall arithmetic complexity is $\tilde O(n \log n) = \tilde O(n)$, which is essentially optimal.
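A sketch of this product tree in Python, with naive coefficient multiplication standing in for the fast (FFT-based) routine that the pseudo-linear bound assumes; the function names are mine, and polynomials are stored lowest degree first:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first.

    Naive O(deg p * deg q) convolution; swap in an FFT-based routine
    to reach the pseudo-linear complexity discussed above.
    """
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def product_tree(a):
    """Expand prod_i (x + a_i) by balanced pairwise multiplication.

    Assumes a non-empty list of a_i; returns coefficients lowest degree first.
    """
    # Leaves of the tree: the linear factors (x + a_i).
    level = [[ai, 1] for ai in a]
    while len(level) > 1:
        # Pair up neighbours and multiply them (the "balanced" products).
        nxt = [poly_mul(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # odd factor is carried up unchanged
        level = nxt
    return level[0]
```

For example, `product_tree([1, 2, 3, 4])` returns `[24, 50, 35, 10, 1]`, i.e. $x^4+10x^3+35x^2+50x+24$. Each pass over `level` halves the number of polynomials while doubling their degrees, which is exactly the $\log n$-depth tree described above.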

When it comes to the bit complexity, my gut feeling is that the same approach is also close to optimal: if all the $a_i$ are of comparable magnitude, the coefficients of the polynomials in this computation tree will also be balanced, and that is when fast multiplication techniques live up to their full potential. It might be different if you know that the $a_i$ have significantly different sizes; in that case, there will probably be a trade-off between the degree of the polynomials and their bit size.