Equating Matrices


I'm looking for an explanation to the following:

Suppose $$Ax = \lambda x$$ is an eigenvalue equation, where $A$ is a non-diagonal matrix. I can rewrite this as $$Ax = \lambda\mathbb{I} x,$$

where the matrix $$\lambda \mathbb{I}$$ is diagonal. It is clear that $$A \neq \lambda\mathbb{I},$$ because $A$ is non-diagonal. My question is: when can you look at a matrix equation like $$Ax=Bx$$ and conclude that $$A = B$$?
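To make the question concrete, here is a minimal numeric check (the particular matrix and eigenpair are illustrative choices, not from the original post): the eigenvalue equation holds for one specific vector, yet the matrices themselves differ.

```python
import numpy as np

# A non-diagonal matrix with a known eigenpair (illustrative example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 1.0])   # eigenvector of A
lam = 3.0                  # corresponding eigenvalue

# The eigenvalue equation Ax = (lambda I) x holds for this particular x ...
print(np.allclose(A @ x, lam * np.eye(2) @ x))  # True

# ... but A is clearly not equal to lambda * I.
print(np.allclose(A, lam * np.eye(2)))          # False
```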


There are 4 answers below.

BEST ANSWER

You've missed the fact that the eigenvalue equation holds for a specific choice of $x$ and $\lambda$, not for all $x$ with the same $\lambda$. As already pointed out, a specific eigenvector only provides information about how the matrix acts along one direction.

What you need is an equation of the form $AM=BM$ with an invertible matrix $M\in\mathbb{R}^{n\times n}$, for example a matrix whose columns form a basis of $\mathbb{R}^n$. Then you can right-multiply by the inverse $M^{-1}$ to get $A=B$.
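The argument above can be sketched numerically (a random invertible $M$ stands in for "columns forming a basis"; the random matrices here are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# An invertible M, i.e. its columns form a basis of R^3
# (a random Gaussian matrix is invertible with probability 1).
M = rng.standard_normal((3, 3))

# Construct B so that BM = AM holds by definition.
B = (A @ M) @ np.linalg.inv(M)

# Right-multiplying AM = BM by M^{-1} recovers A = B.
print(np.allclose(A, B))  # True
```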

ANSWER

Let $e_1$ denote the column vector $e_1=(1,0,0,\ldots,0)^T$. For any matrix $A$, carrying out the matrix multiplication shows that $Ae_1$ is the first column of $A$. If $Ae_1=Be_1$, we can only conclude that both matrices have the same first column; the other columns may very well be unequal.
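A small check of this observation (the particular matrices are made up for illustration): two matrices that share a first column satisfy $Ae_1 = Be_1$ while still being unequal.

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
B = A.copy()
B[:, 1:] = 0.0   # same first column as A, zeros elsewhere

print(np.allclose(A @ e1, B @ e1))  # True: both equal the first column
print(np.array_equal(A, B))         # False: the other columns differ
```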

ANSWER

Let $A = \begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}$ and $B = \begin{pmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{pmatrix}$

Then with $u = \begin{pmatrix}x\\y\end{pmatrix}$ we get:

$$\forall u,Au = Bu \Leftrightarrow \forall (x,y), \begin{matrix}a_{11}x+a_{12}y = b_{11}x+b_{12}y \\ a_{21}x+a_{22}y =b_{21}x+b_{22}y \end{matrix} \Leftrightarrow A=B$$

Note that this is true only if $Au=Bu$ for all $u$.
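Since every $u$ is a linear combination of the standard basis vectors, checking $Au = Bu$ on that basis already decides equality. A minimal sketch of this idea (the helper name `equal_as_maps` is an invented illustration):

```python
import numpy as np

def equal_as_maps(A, B, tol=1e-12):
    """Check Au = Bu for the standard basis vectors e_1, ..., e_n.

    By linearity, agreement on a basis forces Au = Bu for every u,
    and hence A = B as matrices."""
    n = A.shape[1]
    return all(np.allclose(A @ e, B @ e, atol=tol) for e in np.eye(n))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.0], [3.0, 5.0]])

print(equal_as_maps(A, A.copy()))  # True
print(equal_as_maps(A, B))         # False: they differ in one entry
```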

ANSWER

In general, there is no easy way to determine when an equation $Ax = Bx$ implies $A = B$. You have noted one of many cases where this does not hold, and in fact, it is easy to see why there is no reason for it to hold in general.

Multiplying a vector by a matrix is equivalent to applying a linear transformation to that vector. There are many different linear transformations, such as reflections, rotations, stretches, contractions, and combinations of these. Essentially, $Ax$ is the vector you get after applying some linear transformation represented by $A$ to the vector $x$. Similarly, $Bx$ is the vector you get after applying a linear transformation represented by $B$ to the vector $x$.

If it so happens that these resultant vectors are equal, I should not assume that the same transformation acted on $x$ in both cases. For instance, starting with the vector $x = (1,0)^T$, I can apply a counterclockwise rotation by $\pi/2$ to obtain $Ax = (0,1)^T$. Or I can apply a reflection across the line $y=x$ and obtain $Bx = (0,1)^T$.

The point is that two different transformations applied to $x$ produced the same vector in the end. It shouldn't be hard to see that many more transformations can also be used to reach the same vector $(0,1)^T$.
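A quick numeric check of this phenomenon: a counterclockwise rotation by $\pi/2$ and a reflection across the line $y=x$ are different matrices, yet both send $(1,0)^T$ to $(0,1)^T$.

```python
import numpy as np

x = np.array([1.0, 0.0])

# Counterclockwise rotation by pi/2.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Reflection across the line y = x.
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A @ x, B @ x)               # both give [0. 1.]
print(np.allclose(A @ x, B @ x))  # True: same image of x
print(np.array_equal(A, B))       # False: different transformations
```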