Is there a method for quickly raising a matrix with only 0s and 1s to a power? I am aware of the diagonalization method, but it is general and requires a lot of work. Given the constraint I have imposed, can a faster method be devised? I need it for computing the initial states of LFSRs in cryptography.
Quick exponentiation of bit matrices
152 views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
1 answer
$\def\eqdef{\stackrel{\text{def}}{=}}$ There are a couple of things that will help you quickly raise a binary companion matrix to a large power using arithmetic mod $2$.
First, exponentiation by squaring and multiplying will reduce the number of multiplications needed to raise an element of any semigroup to a power $\ q\ $ to $\ \left\lfloor\log_2q\right\rfloor+b-1\ $, where $\ b\ $ is the number of $1$s in the binary expansion of $\ q\ $. If $\ A\ $ is any semigroup element, and $$ q=\sum_{i=1}^b2^{\alpha_i}\ ,$$ where $\ 0\le\alpha_1<\alpha_2<\dots<\alpha_b\ $, then $$ A^q=\prod_{i=1}^bA^{2^{\alpha_i}}\ . $$ If \begin{align} A_0&=A\hspace{3em}\text{ and}\\ A_j&=\big(A_{j-1}\big)^2 \end{align} for $\ j=1,2,\dots,\alpha_b\ $ then $\ A_j=A^{2^j}\ $, and $$ A^q=\prod_{i=1}^bA_{\alpha_i}\ . $$ All the factors in this product can be computed with $\ \alpha_b=\left\lfloor\log_2q\right\rfloor\ $ multiplications—namely, the $\ \alpha_b\ $ successive squarings needed to compute $\ A_1,A_2,\dots,A_{\alpha_b}\ $—and computing the product itself then requires another $\ b-1\ $ multiplications.
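The squaring-and-multiplying scheme can be sketched in a few lines of Python. The function name is mine, not standard; the `mul` argument stands in for whatever the semigroup operation is (ordinary multiplication, matrix multiplication mod $2$, polynomial multiplication mod $f(x)$, ...):

```python
def pow_sq_mul(a, q, mul):
    """Compute a^q (q >= 1) using floor(log2 q) + popcount(q) - 1
    applications of mul, by binary exponentiation.

    a   -- any semigroup element (no identity element is needed)
    q   -- positive integer exponent
    mul -- the associative product of the semigroup
    """
    result = None          # accumulates the product of the A_{alpha_i}
    while q:
        if q & 1:          # this bit of q is set: fold in A^(2^j)
            result = a if result is None else mul(result, a)
        q >>= 1
        if q:              # square only while higher bits remain
            a = mul(a, a)
    return result


# Example: 3^13 over the integers.  13 = 1101 in binary, so this takes
# 3 squarings plus 2 extra multiplications instead of 12 multiplications.
print(pow_sq_mul(3, 13, lambda x, y: x * y))   # 1594323
```

Because `mul` is a parameter, the same routine powers bit matrices or polynomials unchanged; only the product function differs.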
Second, the companion matrix of a polynomial is very sparse, and the result of multiplying a column vector by it can be obtained very efficiently using multiplication of polynomials over $\ GF(2)\ $ modulo the matrix's polynomial. The companion matrix $\ A\ $ of a polynomial $\ f(x)=x^n+\sum\limits_{i=0}^{n-1}f_ix^i\ $ over $\ GF(2)\ $ is given by $$ A=\pmatrix{0&0&0&\dots&0&f_0\\1&0&0&\dots&0&f_1\\0&1&0&\dots&0&f_2\\\vdots&&\ddots&&\vdots&\vdots\\\vdots&&&\ddots&&\vdots\\0&0&0&\dots&1&f_{n-1}}\ . $$ The relevance of this to linear feedback shift registers is that if $\ s_0,s_1,\dots,s_t,\dots\ $ is a sequence of bits satisfying the recursion $$ s_{t+n}=\sum_{i=0}^{n-1}f_is_{t+i}\pmod{2}\ , $$ then $$ \big(s_{t+q},s_{t+q+1},\dots,s_{t+q+n-1}\big)=\big(s_t,s_{t+1},\dots,s_{t+n-1}\big)A^q\ , $$ with all arithmetic being carried out modulo $2$.
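As a concrete check of this correspondence, here is a small Python sketch (names are mine; the polynomial $f$ is stored as an integer bitmask, bit $i$ holding the coefficient $f_i$): multiplying the state row vector by the companion matrix advances the register by exactly one step.

```python
def companion(f):
    """Companion matrix over GF(2) of the polynomial f, given as a
    bitmask with bit i holding the coefficient of x^i."""
    n = f.bit_length() - 1              # degree of f
    A = [[0] * n for _ in range(n)]
    for i in range(1, n):
        A[i][i - 1] = 1                 # subdiagonal of 1s
    for i in range(n):
        A[i][n - 1] = (f >> i) & 1      # last column: f_0 .. f_{n-1}
    return A

def lfsr_step(state, f):
    """One step of the recursion s_{t+n} = sum_i f_i s_{t+i} mod 2."""
    n = f.bit_length() - 1
    new = 0
    for i in range(n):
        new ^= ((f >> i) & 1) & state[i]
    return state[1:] + [new]

def vec_mat(v, A):
    """Row vector times matrix, with arithmetic mod 2."""
    n = len(v)
    return [sum(v[i] & A[i][j] for i in range(n)) & 1 for j in range(n)]

# f(x) = x^5 + x^3 + x^2 + 1  ->  bitmask 0b101101
f = 0b101101
state = [1, 0, 1, 1, 0]
print(vec_mat(state, companion(f)) == lfsr_step(state, f))   # True
```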
If $\ s=\big(s_0,s_1,\dots,s_{n-1}\big)^T\ $, and $\ S(x)\eqdef\sum\limits_{i=0}^{n-1}s_ix^i\ $, then the entries of $\ A^qs\ $ are given by the coefficients of the polynomial $$ x^qS(x)\pmod{f(x)}\ . $$ Computation of the polynomial $\ x^q\pmod{f(x)}\ $ can be carried out very efficiently with the square-and-multiply technique explained above. Because the arithmetic in all the computations is being carried out mod $2$, however, there are a couple of other minor efficiencies available. In particular, the square of a polynomial $\ \Theta(x)=\sum\limits_{i=0}^{n-1}\theta_ix^i\ $ is given by $$ \left(\sum_{i=0}^{n-1}\theta_ix^i\right)^2=\sum_{i=0}^{n-1}\theta_ix^{2i}\ , $$ because $\ \sigma^2=\sigma\ $ when $\ \sigma\in\{0,1\}\ $ and $\ 2\equiv0\pmod{2}\ $. Since the computation of $\ x^q\pmod{f(x)}\ $ will entail lots of squaring and multiplying of polynomials mod $f(x)$ when $\ q\ $ is large, some work can also be saved by precomputing $\ x^i\pmod{f(x)}\ $ for $\ i=n+1,n+2,\dots,2(n-1)\ $.
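The mod-$2$ polynomial arithmetic lends itself to a bitmask representation (bit $i$ = coefficient of $x^i$; all helper names below are my own). Note how squaring simply spaces the bits out, exactly as the identity above says:

```python
def gf2_mul(a, b):
    """Carry-less (mod-2) product of the bitmask-polynomials a and b."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_square(a):
    """Square over GF(2): sum theta_i x^i  ->  sum theta_i x^(2i),
    i.e. just insert a zero bit between consecutive coefficients."""
    r, i = 0, 0
    while a:
        if a & 1:
            r |= 1 << (2 * i)
        a >>= 1
        i += 1
    return r

def gf2_mod(a, f):
    """Reduce the bitmask-polynomial a modulo f."""
    n = f.bit_length() - 1
    while a.bit_length() > n:
        a ^= f << (a.bit_length() - 1 - n)
    return a

def x_pow_mod(q, f):
    """x^q mod f by squaring and multiplying."""
    result, base = 1, 2          # the polynomials 1 and x
    while q:
        if q & 1:
            result = gf2_mod(gf2_mul(result, base), f)
        base = gf2_mod(gf2_square(base), f)
        q >>= 1
    return result

f = 0b101101                      # x^5 + x^3 + x^2 + 1
print(bin(x_pow_mod(100, f)))     # 0b10000, i.e. x^4, matching the example below
```

In a serious implementation one would also fold in the precomputed table of $x^{n+1},\dots,x^{2(n-1)} \bmod f(x)$ mentioned above; the sketch reduces from the top bit down instead, which is simpler but does the same work.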
If $\ e_i, i=1,2,\dots,n\ $ are the columns of the $\ n\times n\ $ unit matrix (over $\ GF(2)\ $) then $\ A^qe_j\ $ is the $\ j^\text{th}\ $ column of $\ A^q\ $ and $\ e_j=Ae_{j-1}\ $, for $\ j=2,3,\dots,n\ $, which means that the $\ j^\text{th}\ $ column of $\ A^q\ $ is $\ A^{q+j-1}e_1\ $, or, equivalently, its entries are the coefficients of the polynomial $$ x^{q+j-1}\pmod{f(x)}\ . $$ Using the method of exponentiating by squaring and multiplying explained above, these polynomials can be calculated very efficiently, even for quite large values of $\ q\ $.
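Putting the pieces together, all of $\ A^q\ $ can be read off from $n$ modular exponentiations of $x$. A sketch with my own helper names, the polynomial again stored as an integer bitmask:

```python
def gf2_mulmod(a, b, f):
    """Product of reduced bitmask-polynomials a, b modulo f, over GF(2)."""
    n = f.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:      # degree reached n: subtract (xor) f
            a ^= f
    return r

def x_pow_mod(q, f):
    """x^q mod f by squaring and multiplying."""
    result, base = 1, 2       # the polynomials 1 and x
    while q:
        if q & 1:
            result = gf2_mulmod(result, base, f)
        base = gf2_mulmod(base, base, f)
        q >>= 1
    return result

def companion_power(q, f):
    """A^q for the companion matrix A of f: with columns indexed from 0,
    column j holds the coefficients of x^(q+j) mod f."""
    n = f.bit_length() - 1
    cols = [x_pow_mod(q + j, f) for j in range(n)]
    return [[(cols[j] >> i) & 1 for j in range(n)] for i in range(n)]

f = 0b101101                  # x^5 + x^3 + x^2 + 1
for row in companion_power(100, f):
    print(row)                # prints the matrix A^100 computed in the example
```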
Example
Even with the above-described techniques for reducing the work, I wouldn't care to carry out this procedure by hand for $\ n\ $ much above $\ 10\ $ or $\ q\ $ much above $\ 2^{10}\ $. To illustrate the method, I'll calculate $\ A^{100}\ $, where $\ A\ $ is the companion matrix of the polynomial $\ f(x)=x^5+x^3+x^2+1\ $: $$ A=\pmatrix{0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&1\\0&0&1&0&1\\0&0&0&1&0}\ . $$ First, we precompute \begin{align} x^6&= xx^5\\ &\equiv x^4+x^3+x\pmod{f(x)}\\ x^7&=xx^6\\ &\equiv x^5+x^4+x^2\pmod{f(x)}\\ &\equiv x^4+x^3+1\pmod{f(x)}\\ x^8&=xx^7\\ &\equiv x^5+x^4+x\pmod{f(x)}\\ &\equiv x^4+x^3+x^2+x+1\pmod{f(x)}\ . \end{align} Now $$ x^{100}=x^{64}x^{32}x^4\ , $$ and \begin{align} x^{16}&=\big(x^8\big)^2\\ &\equiv x^8+x^6+x^4+x^2+1\pmod{f(x)}\\ &\equiv x^4\pmod{f(x)}\ . \end{align} As it happens, the random-looking degree-$5$ polynomial I chose for $\ f(x)\ $ happened to be a divisor of $\ x^{12}-1\ $ (and, yes, this was completely accidental), which makes the remaining computation much easier than it might otherwise have been. \begin{align} x^{32}&=\big(x^{16}\big)^2\\ &\equiv\big(x^4\big)^2\pmod{f(x)}\\ &\equiv x^4+x^3+x^2+x+1\pmod{f(x)}\\ x^{64}&=\big(x^{32}\big)^2\\ &\equiv x^8+x^6+x^4+x^2+1\pmod{f(x)}\\ &\equiv x^4\pmod{f(x)}\\ x^{32}x^4&\equiv\big(x^4+x^3+x^2+x+1\big)x^4\pmod{f(x)}\\ &\equiv x^8+x^7+x^6+x^5+x^4\pmod{f(x)}\\ &\equiv1\pmod{f(x)}\\ x^{100}&=x^{64}x^{32}x^4\\ &\equiv x^4\times1\pmod{f(x)}\\ &\equiv x^4\pmod{f(x)}\ . \end{align} Once we discovered that $\ x^{12}\equiv1\pmod{f(x)}\ $, we could have obtained $\ x^{100}=x^4\big(x^{12}\big)^8\equiv x^4\pmod{f(x)}\ $ immediately, without wading through this last series of computations. I have included them, however, to illustrate the sort of calculation that would be needed for a less obliging polynomial.
We can now read off the columns of $\ A^{100}\ $ from the coefficients of the polynomials \begin{align} x^{100}&\equiv x^4\pmod{f(x)}\\ x^{101}&\equiv x^5\pmod{f(x)}\\ &\equiv x^3+x^2+1\pmod{f(x)}\\ x^{102}&\equiv x^6\pmod{f(x)}\\ &\equiv x^4+x^3+x\pmod{f(x)}\\ x^{103}&\equiv x^7\pmod{f(x)}\\ &\equiv x^4+x^3+1\pmod{f(x)}\\ x^{104}&\equiv x^8\pmod{f(x)}\\ &\equiv x^4+x^3+x^2+x+1\pmod{f(x)}\ . \end{align} That is, $$ A^{100}=\pmatrix{0&0&0&0&1\\1&0&0&0&0\\0&1&0&0&1\\0&0&1&0&1\\0&0&0&1&0}^{100}=\pmatrix{0&1&0&1&1\\0&0&1&0&1\\0&1&0&0&1\\0&1&1&1&1\\1&0&1&1&1}\ . $$
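The result can be double-checked by brute-force repeated squaring of the matrix itself mod $2$. A sketch: this costs $O(n^3\log q)$ bit operations rather than the polynomial method's much smaller cost, but it is fine for $n=5$.

```python
def mat_mul(A, B):
    """Product of two square bit matrices, with arithmetic mod 2."""
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) & 1
             for j in range(n)] for i in range(n)]

def mat_pow(A, q):
    """A^q (q >= 1) by squaring and multiplying."""
    result = None
    while q:
        if q & 1:
            result = A if result is None else mat_mul(result, A)
        q >>= 1
        if q:
            A = mat_mul(A, A)
    return result

# Companion matrix of f(x) = x^5 + x^3 + x^2 + 1, as in the example.
A = [[0, 0, 0, 0, 1],
     [1, 0, 0, 0, 0],
     [0, 1, 0, 0, 1],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]

for row in mat_pow(A, 100):
    print(row)     # matches the matrix read off from the polynomials
```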
Extension to arbitrary binary square matrices
A square matrix $\ A\ $ over any field is always similar to a matrix in Frobenius normal form. This means that there exists an invertible matrix $\ P\ $ with entries in the field such that $$ P^{-1}AP=\pmatrix{C_1&O&\dots&O\\O&C_2&\dots&O\\\vdots&&\ddots&\vdots\\O&O&\dots&C_k} $$ where the $\ C_i\ $ are companion matrices of the invariant factors of $\ A\ $, each of which divides the minimal polynomial of $\ A\ $. When $\ A\ $ is a binary matrix (with entries in $\ GF(2)\ $), then so are $\ P\ $ and the $\ C_i\ $. A large power $\ A^q\ $ of $\ A\ $ can then be computed as $$ A^q=P\pmatrix{C_1^q&O&\dots&O\\O&C_2^q&\dots&O\\\vdots&&\ddots&\vdots\\O&O&\dots&C_k^q}P^{-1}\ , $$ and the powers $\ C_i^q\ $ of the companion matrices can all be computed efficiently by the method explained above.
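Assuming the transforming matrix $\ P\ $ and the factor polynomials of the normal form are already known (computing them is a separate job, typically done with a computer algebra system), the blockwise powering step can be sketched as follows, reusing the polynomial trick for each block; helper names are mine, and conjugating the result by $P$ and $P^{-1}$ is then just two further matrix products.

```python
def gf2_mulmod(a, b, f):
    """Product of reduced bitmask-polynomials a, b modulo f, over GF(2)."""
    n = f.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= f
    return r

def x_pow_mod(q, f):
    """x^q mod f by squaring and multiplying."""
    result, base = 1, 2
    while q:
        if q & 1:
            result = gf2_mulmod(result, base, f)
        base = gf2_mulmod(base, base, f)
        q >>= 1
    return result

def companion_power(q, f):
    """C^q for the companion matrix C of f (columns indexed from 0)."""
    n = f.bit_length() - 1
    cols = [x_pow_mod(q + j, f) for j in range(n)]
    return [[(cols[j] >> i) & 1 for j in range(n)] for i in range(n)]

def block_diag_power(q, factors):
    """diag(C_1^q, ..., C_k^q), where C_i is the companion matrix of the
    i-th factor (each factor given as a GF(2) bitmask-polynomial)."""
    blocks = [companion_power(q, f) for f in factors]
    n = sum(len(B) for B in blocks)
    D = [[0] * n for _ in range(n)]
    off = 0
    for B in blocks:
        k = len(B)
        for i in range(k):
            for j in range(k):
                D[off + i][off + j] = B[i][j]
        off += k
    return D

# Demo blocks: x^2 + x + 1 and x^5 + x^3 + x^2 + 1 (a hypothetical pair).
D = block_diag_power(100, [0b111, 0b101101])
```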