Finding the Jordan Normal Form; relative basis?


*[screenshot of the worked JNF example from the course notes]*

So I'm reading this example about how to compute the JNF of a 3×3 matrix, and I'm confused about the step determining the vector $f$ (which I think was accidentally called $e$ when they said it was equal to $(0,\ 3,\ -5)^T$). What exactly does it mean to "reduce the latter vectors using the former"? Also, once you have this basis, how do you know its block sizes?

1 Answer

Accepted answer:

That’s the problem with reading course notes that you find on the Internet out of context, isn’t it? They refer to things explained earlier in the course. In this case, you have to go back to lecture 5 of the course, in which Dr. Dotsenko defines a “relative basis” and how to compute it. It’s a somewhat idiosyncratic term for what’s more commonly called extending a basis of a subspace to a basis of a larger subspace.

The idea is that if you have subspaces $W'\subset W$ of some vector space $V$ with bases $\{w_1,\dots,w_m\}$ and $\{v_1,\dots,v_n\}$, a way to get a basis of $W$ that is a superset of a basis of $W'$ is to first column-reduce the matrix $\begin{bmatrix}w_1&\cdots&w_m\end{bmatrix}$ to get a “nicer” basis for $W'$ and then use the resulting pivots to column-reduce $\begin{bmatrix}v_1&\cdots&v_n\end{bmatrix}$. Of course, you can work with the transposes so that you perform the more familiar row-reduction.
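The procedure above can be sketched in code. This is a minimal illustration using SymPy (my tooling choice, not the notes'); the function name `extend_basis` and its interface are my own:

```python
import sympy as sp

def extend_basis(w_prime_basis, w_basis):
    """Extend a basis of W' to a basis of W (the notes' "relative basis").

    w_prime_basis, w_basis: lists of vectors (as lists of numbers).
    First row-reduce the W'-basis to get pivot rows, then reduce each
    W-vector against those pivots (and against any new vectors already
    kept); whatever survives as a nonzero row extends the W'-basis.
    """
    # "nicer" basis for W': row-reduce and drop any zero rows
    reducers = [r for r in sp.Matrix(w_prime_basis).rref()[0].tolist()
                if any(x != 0 for x in r)]
    extra = []
    for v in w_basis:
        v = sp.Matrix([v])
        for r in reducers + extra:
            r = sp.Matrix([r])
            piv = next(j for j in range(r.cols) if r[j] != 0)
            v = v - (v[piv] / r[piv]) * r   # clear the pivot column of v
        if any(x != 0 for x in v):
            extra.append(list(v))
    return extra
```

For instance, `extend_basis([[1, -1, 5]], [[1, 2, 0], [3, 0, 10]])` returns `[[0, 3, -5]]`, reproducing the reduction worked out below.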

So, in the example in your question, we have the basis vector $(1,-1,5)^T$ for $\ker(A-I)$ and want to find a basis of $\ker(A-I)^2$ that includes this vector. The other vector in the basis will be the generalized eigenvector that generates the Jordan chain (which he calls a “thread”). So, we row-reduce $$\begin{bmatrix}1&-1&5 \\ \hline 1&2&0 \\3&0&10\end{bmatrix} \to \begin{bmatrix}1&-1&5\\\hline0&3&-5\\0&3&-5\end{bmatrix}$$ to get the generalized eigenvector $\mathbf e=(0,3,-5)^T$.
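As a sanity check on this reduction (the matrix $A$ itself isn't shown in the excerpt, so this only verifies the linear-algebra bookkeeping, in SymPy):

```python
import sympy as sp

e1 = sp.Matrix([1, -1, 5])                 # basis vector of ker(A - I)
f = sp.Matrix([0, 3, -5])                  # vector produced by the reduction
W = sp.Matrix([[1, 2, 0], [3, 0, 10]]).T   # ker((A - I)^2) basis, as columns

# f lies in the span of the ker((A - I)^2) basis...
assert sp.Matrix.hstack(W, f).rank() == W.rank()
# ...and {e1, f} is linearly independent, so it is a basis of that kernel
assert sp.Matrix.hstack(e1, f).rank() == 2
```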

In example 2 of the handout that you’re asking about, we similarly have $$\begin{bmatrix}1&-1&1&0 \\ \hline1&0&3&0 \\ -2&3&0&0\end{bmatrix} \to \begin{bmatrix}1&-1&1&0 \\ \hline 0&1&2&0 \\ 0&1&2&0 \end{bmatrix}$$ and $$\begin{bmatrix}0&0&1&-1 \\ \hline \frac14&\frac14&1&0 \\ \frac14&\frac14&0&1\end{bmatrix} \to \begin{bmatrix}0&0&1&-1 \\ \hline \frac14&\frac14&0&1 \\ \frac14&\frac14&0&1\end{bmatrix}.$$ Why he chose multiples of the originally-computed kernel basis vectors so that the components were integers in the first two computations but not in the last is unexplained.
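Both of these reductions can be replayed step by step; here is a quick check in exact arithmetic (SymPy is my tooling choice, not the handout's):

```python
import sympy as sp

# First reduction: clear column 1 using the top row (1, -1, 1, 0).
top = sp.Matrix([[1, -1, 1, 0]])
assert sp.Matrix([[1, 0, 3, 0]]) - top == sp.Matrix([[0, 1, 2, 0]])
assert sp.Matrix([[-2, 3, 0, 0]]) + 2 * top == sp.Matrix([[0, 1, 2, 0]])

# Second reduction: the pivot of (0, 0, 1, -1) sits in column 3, so
# only that column gets cleared; the second vector already has 0 there.
quarter = sp.Rational(1, 4)
top2 = sp.Matrix([[0, 0, 1, -1]])
v1 = sp.Matrix([[quarter, quarter, 1, 0]])
v2 = sp.Matrix([[quarter, quarter, 0, 1]])
assert v1 - top2 == v2   # (1/4, 1/4, 1, 0) - (0, 0, 1, -1) = (1/4, 1/4, 0, 1)
assert v2[2] == 0        # nothing to clear in the second vector
```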