For a square matrix $M$, I was taught the following algorithm to find the minimum polynomial. First, choose a random vector $v$ (hoping it isn't an eigenvector of $M$) and compute $Mv$. Form the matrix $$\begin{bmatrix} v & Mv \end{bmatrix}$$ and row reduce. If the columns are linearly independent, repeat the process to create $$\begin{bmatrix} v & Mv & M^2v \end{bmatrix}$$ and row reduce again. Repeat until the columns become linearly dependent. Now just read off the final column to obtain a relation of the form $$M^n v = a_{n-1}M^{n-1}v + a_{n-2}M^{n-2}v + \cdots + a_1 Mv + a_0 v,$$ which gives the minimum polynomial.

This all makes sense, and I see the advantages: for example, if you ever spot a relation like $Mv = 2M^2v$ along the way, you are done, since $2M^2v - Mv = 0$ means $2x^2 - x$ annihilates $v$. However, there are clear disadvantages, like accidentally choosing an eigenvector for the random vector, in which case the process stops after one step and yields only a factor of the minimum polynomial.

My question is: is there a better, more reliable algorithm? I understand that this is a good algorithm for computing the minimum polynomial by hand, but it clearly has some faults, and I will be tested on this on paper. Is there a better way to do this?
A better algorithm to find the minimum polynomial of a matrix?
654 views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There are 2 best solutions below
One way to do it: you know that the minimal polynomial divides the characteristic polynomial, and that the minimal and characteristic polynomials have the same distinct roots. Therefore you can reduce the multiplicities of the repeated roots of the characteristic polynomial and check whether the result still annihilates the matrix, to see whether the minimal polynomial has smaller degree than the characteristic polynomial. Another way to compute the minimal polynomial is to compute the Jordan canonical form of the matrix: the multiplicity of $\lambda$ as a root of the minimal polynomial is then the size of the largest Jordan block associated with $\lambda$.
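For what it's worth, here is a sketch of the first approach in Python with SymPy (the helper names and the example matrix are my own): factor the characteristic polynomial, then test the divisors that keep every distinct root, in order of increasing degree, until one annihilates the matrix.

```python
import itertools
import sympy as sp

def poly_of_matrix(p, M, x):
    """Evaluate the polynomial p(x) at the matrix M by Horner's rule."""
    n = M.shape[0]
    R = sp.zeros(n, n)
    for c in sp.Poly(p, x).all_coeffs():   # coefficients, highest degree first
        R = R * M + c * sp.eye(n)
    return R

def min_poly_from_charpoly(M):
    """Search the divisors of the characteristic polynomial that retain
    every irreducible factor at least once, lowest degree first."""
    x = sp.symbols('x')
    chi = M.charpoly(x).as_expr()
    _, factors = sp.factor_list(chi)       # [(irreducible factor, multiplicity)]
    ranges = [range(1, m + 1) for _, m in factors]

    def degree(exps):
        return sum(e * sp.degree(f, x) for (f, _), e in zip(factors, exps))

    for exps in sorted(itertools.product(*ranges), key=degree):
        cand = sp.Integer(1)
        for (f, _), e in zip(factors, exps):
            cand *= f**e
        if poly_of_matrix(cand, M, x) == sp.zeros(*M.shape):
            return sp.expand(cand)

M = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 2]])   # chi = (x - 2)^3
print(min_poly_from_charpoly(M))                   # x**2 - 4*x + 4
```

For this Jordan-type example the repeated root $2$ has its multiplicity reduced from $3$ to $2$, matching the size of the largest Jordan block.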
Let $M\in M_n(\mathbb{Q})$ and let $r$ be the degree of its minimal polynomial. Starting from a random vector $v$, the method you describe works with probability $1$ (this is not obvious to prove; one can do it using the Frobenius normal form).
Using sophisticated methods, the calculation of the minimal polynomial $m_M$ has complexity $O(n^3)$. Thus the above method is interesting only when $r/n$ is small. Indeed, the calculation of $(M^kv)_{k\leq r+1}$ has complexity $O(rn^2)$, and so does the calculation of the reduced row echelon form of these vectors.
Conclusion: the method under consideration is fast when $r=o(n)$ but, above all, it is very easy to program.
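Indeed, here is a sketch of the method in Python with SymPy (the function name and test matrix are my own; exact rational arithmetic is assumed):

```python
import sympy as sp

def krylov_min_poly(M, v):
    """Minimum polynomial of v relative to M: stack v, Mv, M^2 v, ...
    until they become linearly dependent, then read off the relation
    M^k v = a_{k-1} M^{k-1} v + ... + a_0 v."""
    x = sp.symbols('x')
    vecs = [v]
    while True:
        w = M * vecs[-1]                    # next Krylov vector
        A = sp.Matrix.hstack(*vecs)
        if sp.Matrix.hstack(A, w).rank() == A.rank():
            # w is a combination of the previous (independent) vectors,
            # so the solution is unique and the parameter list is empty
            a, _ = A.gauss_jordan_solve(w)
            k = len(vecs)
            return sp.expand(x**k - sum(a[i] * x**i for i in range(k)))
        vecs.append(w)

# Jordan block with eigenvalue 2: minimum polynomial is (x - 2)^2
M = sp.Matrix([[2, 1], [0, 2]])
print(krylov_min_poly(M, sp.Matrix([0, 1])))   # x**2 - 4*x + 4
```

Note that starting from an eigenvector, e.g. $v=(1,0)^T$ here, this returns only the factor $x-2$, which is exactly the failure mode raised in the question.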
Now, you want to use this method by hand for $n\leq 4$. I don't see the point. Indeed, if $\chi_M$, the characteristic polynomial of $M$, has simple roots, then $m_M=\chi_M$; otherwise, $\chi_M$ has a repeated irreducible factor, which you can compute explicitly (for instance from $\gcd(\chi_M,\chi_M')$).
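To illustrate this last point with a small example of my own (not from the original answer): the matrix
$$M=\begin{bmatrix}2&1&0\\0&2&0\\0&0&3\end{bmatrix}$$
has $\chi_M=(x-2)^2(x-3)$. The repeated root comes out of $\gcd(\chi_M,\chi_M')=x-2$, and since $(M-2I)(M-3I)\neq 0$, the candidate $(x-2)(x-3)$ does not annihilate $M$, so $m_M=(x-2)^2(x-3)=\chi_M$.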