Quick exponentiation of bit matrices


Is there a method for quickly raising a matrix with only 0s and 1s to a power? I am aware of the diagonalization method, but it is general and requires a lot of work. Given the constraint I have imposed, can a faster method be devised? I need this for computing the initial states of LFSRs in cryptography.

Best answer

$\def\eqdef{\stackrel{\text{def}}{=}}$ There are a couple of things that will help you quickly raise a binary companion matrix to a large power using arithmetic mod $2$.

First, exponentiation by squaring and multiplying will reduce to $\ \left\lfloor\log_2q\right\rfloor+b-1\ $ the number of multiplications needed to raise an element of any semigroup to a power $\ q\ $, where $\ b\ $ is the number of $1$s in the binary expansion of $\ q\ $. If $\ A\ $ is any semigroup element, and $$ q=\sum_{i=1}^b2^{\alpha_i}\ ,$$ where $\ 0\le\alpha_1<\alpha_2<\dots<\alpha_b\ $, then $$ A^q=\prod_{i=1}^bA^{2^{\alpha_i}}\ . $$ If \begin{align} A_0&=A\hspace{3em}\text{ and}\\ A_j&=\big(A_{j-1}\big)^2 \end{align} for $\ j=1,2,\dots,\alpha_b\ $, then $\ A_j=A^{2^j}\ $, and $$ A^q=\prod_{i=1}^bA_{\alpha_i}\ . $$ All the factors in this product can be computed with $\ \alpha_b=\left\lfloor\log_2q\right\rfloor\ $ multiplications, namely the $\ \alpha_b\ $ successive squarings needed to compute $\ A_1,A_2,\dots,A_{\alpha_b}\ $, and computing the product itself then requires another $\ b-1\ $ multiplications.
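As a concrete sketch (my own illustration, not part of the original answer, with the helper name `semigroup_pow` invented for the occasion), square-and-multiply can be written once for an arbitrary semigroup by passing the multiplication in as a function:

```python
def semigroup_pow(A, q, mul):
    """Raise A to the power q (q >= 1) using floor(log2 q) + b - 1
    multiplications, where b is the number of 1-bits in q."""
    result = None          # no identity element assumed, so start empty
    square = A             # invariant: square == A^(2^j) on the j-th pass
    while q:
        if q & 1:          # this bit contributes the factor A^(2^j)
            result = square if result is None else mul(result, square)
        q >>= 1
        if q:              # square again only while bits remain
            square = mul(square, square)
    return result

print(semigroup_pow(2, 10, lambda a, b: a * b))  # 1024
```

Because no identity element is assumed, the accumulator starts out empty and is seeded with the first contributing power of $\ A\ $.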

Second, the companion matrix of a polynomial is very sparse, and the result of multiplying a column vector by it can be obtained very efficiently using multiplication of polynomials over $\ GF(2)\ $ modulo the matrix's polynomial. The companion matrix $\ A\ $ of a polynomial $\ f(x)=x^n+\sum_{i=0}^{n-1}f_ix^i\ $ over $\ GF(2)\ $ is given by $$ A=\pmatrix{0&0&0&\dots&0&f_0\\1&0&0&\dots&0&f_1\\0&1&0&\dots&0&f_2\\\vdots&&\ddots&&\vdots&\vdots\\\vdots&&&\ddots&&\vdots\\0&0&0&\dots&1&f_{n-1}}\ . $$ The relevance of this to linear feedback shift registers is that if $\ s_0,s_1,\dots,s_t,\dots\ $ is a sequence of bits satisfying the recursion $$ s_{t+n}=\sum_{i=0}^{n-1}f_is_{t+i}\pmod{2}\ , $$ then $$ \big(s_{t+q},s_{t+q+1},\dots,s_{t+q+n-1}\big)=\big(s_t,s_{t+1},\dots,s_{t+n-1}\big)A^q\ , $$ with all arithmetic being carried out modulo $2$.
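The identity above is easy to check numerically. The following sketch (my addition; the starting state and the small exponent are arbitrary choices) builds the companion matrix of $\ f(x)=x^5+x^3+x^2+1\ $, runs the recursion, and compares both sides:

```python
f = [1, 0, 1, 1, 0]   # f_0 .. f_4 for f(x) = x^5 + x^3 + x^2 + 1
n = len(f)

# Companion matrix A: 1s on the subdiagonal, f_0 .. f_{n-1} in the last column.
A = [[0] * n for _ in range(n)]
for i in range(1, n):
    A[i][i - 1] = 1
for i in range(n):
    A[i][n - 1] = f[i]

def matmul2(X, Y):
    """Matrix product with all arithmetic carried out mod 2."""
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def vecmat2(v, M):
    """Row vector times matrix, mod 2."""
    return [sum(v[i] * M[i][j] for i in range(n)) % 2 for j in range(n)]

q = 7
Aq = A
for _ in range(q - 1):            # naive power is fine for tiny q
    Aq = matmul2(Aq, A)

state = [1, 0, 0, 1, 1]           # s_0 .. s_4 (arbitrary starting window)
s = list(state)
for t in range(q):                # run s_{t+n} = sum_i f_i s_{t+i} mod 2
    s.append(sum(f[i] * s[t + i] for i in range(n)) % 2)

print(vecmat2(state, Aq) == s[q:q + n])  # True
```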

If $\ s=\big(s_0,s_1,\dots,s_{n-1}\big)^T\ $, and $\ S(x)\eqdef\sum_{i=0}^{n-1}s_ix^i\ $, then the entries of $\ A^qs\ $ are given by the coefficients of the polynomial $$ x^qS(x)\pmod{f(x)}\ . $$ Computation of the polynomial $\ x^q\pmod{f(x)}\ $ can be carried out very efficiently with the square-and-multiply technique explained above. Because the arithmetic in all the computations is being carried out mod $2$, however, there are a couple of other minor efficiencies available. In particular, the square of a polynomial $\ \Theta(x)=\sum_{i=0}^{n-1}\theta_ix^i\ $ is given by $$ \left(\sum_{i=0}^{n-1}\theta_ix^i\right)^2=\sum_{i=0}^{n-1}\theta_ix^{2i}\ , $$ because $\ \sigma^2=\sigma\ $ when $\ \sigma\in\{0,1\}\ $ and the cross terms all have coefficient $\ 2\equiv0\pmod{2}\ $. Since the computation of $\ x^q\pmod{f(x)}\ $ will entail lots of squaring and multiplying of polynomials mod $f(x)$ when $\ q\ $ is large, some work can also be saved by precomputing $\ x^i\pmod{f(x)}\ $ for $\ i=n+1,n+2,\dots,2(n-1)\ $.
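A common concrete representation, assumed here but not mandated by the answer, stores a $\ GF(2)\ $ polynomial as a Python integer with bit $\ i\ $ holding the coefficient of $\ x^i\ $: addition is XOR, and multiplication by $\ x\ $ is a left shift. A sketch of $\ x^q\bmod f(x)\ $ by square-and-multiply:

```python
def polymod(p, f, n):
    """Reduce polynomial p modulo f, where deg f = n (bit n of f is set)."""
    while p.bit_length() > n:            # while deg p >= n
        p ^= f << (p.bit_length() - 1 - n)
    return p

def polymulmod(a, b, f, n):
    """Shift-and-XOR product of a and b over GF(2), reduced mod f."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return polymod(r, f, n)

def x_pow_mod(q, f, n):
    """Compute x^q mod f(x) by square-and-multiply over GF(2)."""
    result = 1                           # the constant polynomial 1
    square = 2                           # the polynomial x
    while q:
        if q & 1:
            result = polymulmod(result, square, f, n)
        q >>= 1
        if q:
            square = polymulmod(square, square, f, n)
    return result

f, n = 0b101101, 5                       # f(x) = x^5 + x^3 + x^2 + 1
print(bin(x_pow_mod(100, f, n)))         # 0b10000, i.e. x^100 ≡ x^4
```

For brevity this sketch uses generic shift-and-XOR multiplication throughout; the mod-2 squaring shortcut and the precomputed reductions described above could be substituted into `polymulmod` for further savings.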

If $\ e_j,\ j=1,2,\dots,n\ $, are the columns of the $\ n\times n\ $ identity matrix over $\ GF(2)\ $, then $\ A^qe_j\ $ is the $\ j^\text{th}\ $ column of $\ A^q\ $, and $\ e_j=Ae_{j-1}\ $ for $\ j=2,3,\dots,n\ $. It follows that the $\ j^\text{th}\ $ column of $\ A^q\ $ is $\ A^{q+j-1}e_1\ $ or, equivalently, that its entries are the coefficients of the polynomial $$ x^{q+j-1}\pmod{f(x)}\ . $$ Using the method of exponentiation by squaring and multiplying explained above, these polynomials can be calculated very efficiently, even for quite large values of $\ q\ $.

Example

Even with the above-described techniques for reducing the work, I wouldn't care to carry out this procedure by hand for $\ n\ $ much above $\ 10\ $ or $\ q\ $ much above $\ 2^{10}\ $. To illustrate the method, I'll calculate $\ A^{100}\ $, where $\ A\ $ is the companion matrix of the polynomial $\ f(x)=x^5+x^3+x^2+1\ $: $$ A=\pmatrix{0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&1\\0&0&1&0&1\\0&0&0&1&0}\ . $$ First, we precompute \begin{align} x^6&= xx^5\\ &\equiv x^4+x^3+x\pmod{f(x)}\\ x^7&=xx^6\\ &\equiv x^5+x^4+x^2\pmod{f(x)}\\ &\equiv x^4+x^3+1\pmod{f(x)}\\ x^8&=xx^7\\ &\equiv x^5+x^4+x\pmod{f(x)}\\ &\equiv x^4+x^3+x^2+x+1\pmod{f(x)}\ . \end{align} Now $$ x^{100}=x^{64}x^{32}x^4\ , $$ and \begin{align} x^{16}&=\big(x^8\big)^2\\ &\equiv x^8+x^6+x^4+x^2+1\pmod{f(x)}\\ &\equiv x^4\pmod{f(x)}\ . \end{align} As it happens, the random-ish degree-$5$ polynomial I chose for $\ f(x)\ $ turned out to be a divisor of $\ x^{12}-1\ $ (and, yes, this was completely accidental), which makes the remaining computation much easier than it might otherwise have been. \begin{align} x^{32}&=\big(x^{16}\big)^2\\ &\equiv\big(x^4\big)^2\pmod{f(x)}\\ &\equiv x^4+x^3+x^2+x+1\pmod{f(x)}\\ x^{64}&=\big(x^{32}\big)^2\\ &\equiv x^8+x^6+x^4+x^2+1\pmod{f(x)}\\ &\equiv x^4\pmod{f(x)}\\ x^{32}x^4&\equiv\big(x^4+x^3+x^2+x+1\big)x^4\pmod{f(x)}\\ &\equiv x^8+x^7+x^6+x^5+x^4\pmod{f(x)}\\ &\equiv1\pmod{f(x)}\\ x^{100}&=x^{64}x^{32}x^4\\ &\equiv x^4\times1\pmod{f(x)}\ . \end{align} Once we discovered that $\ x^{12}\equiv1\pmod{f(x)}\ $, we could have obtained $\ x^{100}=x^4\big(x^{12}\big)^8\equiv x^4\pmod{f(x)}\ $ immediately, without wading through this last series of computations. I have included them, however, to illustrate the sort of calculation that would be needed for a less obliging polynomial.
We can now read off the columns of $\ A^{100}\ $ from the coefficients of the polynomials \begin{align} x^{100}&\equiv x^4\pmod{f(x)}\\ x^{101}&\equiv x^5\pmod{f(x)}\\ &\equiv x^3+x^2+1\pmod{f(x)}\\ x^{102}&\equiv x^6\pmod{f(x)}\\ &\equiv x^4+x^3+x\pmod{f(x)}\\ x^{103}&\equiv x^7\pmod{f(x)}\\ &\equiv x^4+x^3+1\pmod{f(x)}\\ x^{104}&\equiv x^8\pmod{f(x)}\\ &\equiv x^4+x^3+x^2+x+1\pmod{f(x)}\ . \end{align} That is, $$ A^{100}=\pmatrix{0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&1\\0&0&1&0&1\\0&0&0&1&0}^{100}=\pmatrix{0&1&0&1&1\\0&0&1&0&1\\0&1&0&0&1\\0&1&1&1&1\\1&0&1&1&1}\ . $$
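The column read-off can also be mechanized. In this sketch (my addition, using an invented bit-per-coefficient integer encoding of polynomials), we start from $\ x^{100}\equiv x^4\pmod{f(x)}\ $, already derived above, and multiply by $\ x\ $ once per column:

```python
def polymod(p, f, n):
    """Reduce polynomial p modulo f, where deg f = n (bit n of f is set)."""
    while p.bit_length() > n:
        p ^= f << (p.bit_length() - 1 - n)
    return p

f, n = 0b101101, 5                 # f(x) = x^5 + x^3 + x^2 + 1
col_poly = 0b10000                 # x^100 ≡ x^4 (mod f), from the work above

columns = []
for j in range(n):
    columns.append([(col_poly >> i) & 1 for i in range(n)])
    col_poly = polymod(col_poly << 1, f, n)   # x^(100+j) -> x^(100+j+1) mod f

# Column j holds the coefficients of x^(100+j) mod f; transpose into A^100.
A100 = [[columns[j][i] for j in range(n)] for i in range(n)]

expected = [[0, 1, 0, 1, 1],
            [0, 0, 1, 0, 1],
            [0, 1, 0, 0, 1],
            [0, 1, 1, 1, 1],
            [1, 0, 1, 1, 1]]
print(A100 == expected)            # True
```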

Extension to arbitrary binary square matrices

A square matrix $\ A\ $ over any field is always similar to a matrix in Frobenius normal form. This means that there exists an invertible matrix $\ P\ $ with entries in the field such that $$ P^{-1}AP=\pmatrix{C_1&O&\dots&O\\O&C_2&\dots&O\\\vdots&&\ddots&\vdots\\O&O&\dots&C_k}\ , $$ where the $\ C_i\ $ are companion matrices of the invariant factors of $\ A\ $, each of which divides the minimal polynomial of $\ A\ $. When $\ A\ $ is a binary matrix (with entries in $\ GF(2)\ $), then so are $\ P\ $ and the $\ C_i\ $. A large power $\ A^q\ $ of $\ A\ $ can then be computed as $$ A^q=P\pmatrix{C_1^q&O&\dots&O\\O&C_2^q&\dots&O\\\vdots&&\ddots&\vdots\\O&O&\dots&C_k^q}P^{-1}\ , $$ and the powers $\ C_i^q\ $ of the companion matrices can all be computed efficiently by the method explained above.
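A tiny sanity check of the blockwise formula (my own sketch; here $\ P\ $ and the companion blocks are simply given, since computing the Frobenius decomposition itself is a separate problem):

```python
def matmul2(X, Y):
    """Matrix product over GF(2)."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def matpow2(M, q):
    """Naive q-th power mod 2; fine for this tiny sanity check."""
    R = M
    for _ in range(q - 1):
        R = matmul2(R, M)
    return R

# D = diag(C1, C2): C1 is the companion matrix of x + 1, C2 of x^2 + x + 1.
D = [[1, 0, 0],
     [0, 0, 1],
     [0, 1, 1]]

# P = I + E_{12} is an involution mod 2, so P^{-1} = P.
P = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]

A = matmul2(matmul2(P, D), P)      # A = P D P^{-1}

q = 5
# Since D is block diagonal, D^q = diag(C1^q, C2^q); conjugating back gives A^q.
print(matmul2(matmul2(P, matpow2(D, q)), P) == matpow2(A, q))   # True
```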