Inverse of matrix - definition


Usually the inverse of a square $n \times n$ matrix $A$ is defined as a matrix $A'$ such that:

$A \cdot A' = A' \cdot A = E$

where $E$ is the identity matrix.

From this definition one proves uniqueness, relying essentially on the fact that $A'$ is both a right and a left inverse.

But what if... we define right and left inverse matrices separately? Can we then prove that:

(1) the right inverse is unique (when it exists)
(2) the left inverse is unique (when it exists)
(3) the right inverse equals the left one

I mean, the usual definition seems too strong to me. Why is the inverse introduced this way? Is it because these three statements cannot be proven if the inverse is introduced the way I describe?

There are 3 answers below.

Answer 1 (score 14)

Let $A$ and $B$ be $n \times n$ matrices such that $AB = I$, where $I$ is the $n \times n$ identity matrix. Now we will try to solve

$BX = I$

for an unknown $n \times n$ matrix $X$. We (left-)multiply both sides by $A$ to get

$ABX = AI$

which gives us

$IX = A$

so $X = A$ (because $IX = X$) and therefore $BA = I$. So we just showed $AB = I$ implies

$AB = BA = I$.

So, as you can see, it is not necessary to introduce a second definition for the inverse, because it leads to the same thing. Instead of $B$ we write $A^{-1}$.

A solution to $BX = I$ exists because $B$ can be shown to induce a bijective mapping $x \mapsto Bx$ from $\mathbb{R}^{n}$ to $\mathbb{R}^{n}$.
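As a quick numerical illustration of the two-sided conclusion, here is a minimal sketch in plain Python (the `matmul` helper and the particular matrices are my own choices, not part of the answer): we pick a square $A$ and a $B$ satisfying $AB = I$, and check that $BA = I$ comes for free.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [1, 1]]      # an invertible 2x2 matrix (det = 1)
B = [[1, -1], [-1, 2]]    # chosen so that A*B = I
I2 = [[1, 0], [0, 1]]

print(matmul(A, B) == I2)  # True: B is a right inverse of A
print(matmul(B, A) == I2)  # True: B turns out to be a left inverse as well
```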

Answer 2 (score 2)

The accepted answer above has a bug. I mean, the conclusion $(AB=I)\Rightarrow (BA=I)$ is true for $n\times n$ matrices, but the proof is wrong.

Suppose that $AB=I$.

IF there exists $X$ such that $BX=I$ -- which you don't know --

THEN you can conclude that $ABX=AI$, whence $X=A$.

But maybe $X$ does not exist. In fact, the proposed proof never used the fact that the matrices are square, which is crucial.

Example. Suppose $A=\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}$ and $B=A^t=\begin{pmatrix}1&0\\ 0&1\\ 0&0\end{pmatrix}$

Then $AB=I$, more precisely $AB=I_2$, the $2\times 2$ identity.

IF there existed $X$ such that $BX=I$, more precisely $BX=I_3$, the $3\times 3$ identity, then one could conclude that $X=A$. BUT in fact $BA=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix}\neq I_3$.
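The counterexample is easy to check by hand, but here is a short plain-Python sketch of it (the `matmul` helper is written just for this check, not part of the answer):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 0],
     [0, 1, 0]]    # 2x3
B = [[1, 0],
     [0, 1],
     [0, 0]]       # 3x2, the transpose of A

print(matmul(A, B))  # [[1, 0], [0, 1]]                  -> AB = I_2
print(matmul(B, A))  # [[1, 0, 0], [0, 1, 0], [0, 0, 0]] -> BA != I_3
```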

It is still true, for $n\times n$ matrices, that $AB=I$ if and only if $BA=I$, but this fact does not have a two-line proof. (See for instance this question.)

Note: the above example can be adapted to "square" matrices of infinite size. That is to say, endomorphisms of infinite-dimensional vector spaces that have a left inverse need not have a right inverse.

Finally, to give a complete answer to the original question, observe that any matrix $C=\begin{pmatrix}1&0\\ 0&1\\ x&y\end{pmatrix}$ is a right inverse of $A$ (and so $C^t$ is a left inverse of $B$). This shows that right and left inverses need not be unique outside the realm of $n\times n$ matrices.
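The non-uniqueness is also easy to see computationally. In this plain-Python sketch (the `matmul` helper is a hypothetical aid, not part of the answer), the third row of $C$ is arbitrary, yet $AC = I_2$ always holds:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 0], [0, 1, 0]]
I2 = [[1, 0], [0, 1]]

# C depends on two free parameters x and y, yet A*C = I_2 for all of them.
for x, y in [(0, 0), (1, 2), (-3, 5)]:
    C = [[1, 0], [0, 1], [x, y]]
    print(matmul(A, C) == I2)  # True every time: infinitely many right inverses
```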

So yes, the fact that for $n \times n$ matrices a right inverse is also a left inverse is crucial.

Answer 3 (score 5)

So... square matrices over a field are special: one can prove that if a square matrix has a left inverse, then it also has a right inverse. And once you know that, it is easy to show that the left and right inverse must be equal. This holds in general for groups/rings: if $x$ is an element, and there exist $a,b$ such that $ax=1$ and $xb=1$, then $a=b$: $$a = a1 = a(xb) = (ax)b = 1b = b.$$

(For matrices, use capital letters and the identity matrix $I$ in place of $1$.)

Now, if $A$ is a square matrix over a field, and there exists a matrix $B$ such that $AB=I$, then interpret $A$ and $B$ as linear transformations. Then $AB$ is bijective, hence $A$ is surjective; but that means that $A$ is full rank, hence by the Dimension Theorem has trivial nullity, hence $A$ is also injective. Since $A$ is therefore a bijection, it has a left inverse $C$ (which is also linear, and thus corresponds to a matrix), and thus $A$ has a two-sided inverse.

Similarly, if $A$ has a left inverse $C$ with $CA=I$, then $A$ is one-to-one, hence full rank and so surjective, so it has a right inverse and the argument proceeds as before.

In arbitrary rings you can have elements that have a left inverse but no right inverse; in such cases, you will have multiple left inverses but no right inverse. For if $x$ is an element such that there exists $a$ with $ax=1$, but $xb\neq 1$ for all $b$, then $xa\neq 1$, so $xa-1\neq 0$. Then $(xa-1)x = xax-x = x-x = 0$, so then $(a+xa-1)x = ax+(xa-1)x = 1$, but $a+xa-1\neq a$. Thus, $x$ has at least two left inverses. A symmetric argument shows that if $x$ has a right inverse but no left inverse, then it has at least two right inverses.
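The construction $a + xa - 1$ can be made concrete in the ring of linear maps on polynomials. Here is a hedged plain-Python sketch (all helper names are mine): polynomials are coefficient lists, `X` is multiplication by $x$ (which has a left inverse but no right inverse), `a` drops the constant term, and `a2` implements $a + Xa - 1$.

```python
def trim(p):
    """Drop trailing zero coefficients so equal polynomials compare equal."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def padd(p, q):
    """Add two polynomials given as coefficient lists [c0, c1, ...]."""
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def X(p):
    """The element x: multiplication by x, i.e. shift coefficients up."""
    return trim([0] + p)

def a(p):
    """One left inverse of X: drop the constant term."""
    return trim(p[1:])

def a2(p):
    """a + X*a - 1, a second left inverse of X."""
    return padd(padd(a(p), X(a(p))), [-c for c in p])

p = [3, 1, 4]                       # 3 + x + 4x^2, an arbitrary test polynomial
print(a(X(p)) == p, a2(X(p)) == p)  # True True: both are left inverses of X
print(a([5]), a2([5]))              # [] vs [-5]: a and a2 are different maps
```

Note that `a2` agrees with `a` after composing with `X`, but they differ on constant polynomials, exactly as the algebraic argument predicts.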

And yes, it is possible to have rings in which some elements have left inverses but no right inverses. Consider the vector space $\mathbb{R}[x]$ of all polynomials with coefficients in $\mathbb{R}$, and the ring of all linear transformations from $\mathbb{R}[x]$ to itself. The linear transformation $T(p(x)) = xp(x)$ is one-to-one but not onto, so it has a left inverse but no right inverse. In fact, it has infinitely many left inverses. Similarly, the linear transformation $T(p(x)) = p'(x)$ (the derivative) is onto, but not one-to-one, so it has a right inverse (in fact, infinitely many), but no left inverse.
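The derivative example can also be sketched in plain Python (again with coefficient lists; the names `D` and `antideriv` are mine): every choice of integration constant gives a different right inverse of the derivative, while the derivative itself is not injective and so can have no left inverse.

```python
from fractions import Fraction

def D(p):
    """Derivative of p = [c0, c1, c2, ...] (coefficients of 1, x, x^2, ...)."""
    return [Fraction(i) * c for i, c in enumerate(p)][1:]

def antideriv(c):
    """A right inverse of D: the antiderivative whose constant term is c."""
    def S(p):
        return [Fraction(c)] + [Fraction(coef) / (i + 1)
                                for i, coef in enumerate(p)]
    return S

p = [Fraction(2), Fraction(0), Fraction(3)]  # 2 + 3x^2
for c in (0, 1, 7):
    print(D(antideriv(c)(p)) == p)  # True for every c: a right inverse each time

# D cannot have a left inverse: it is not injective (all constants map to 0).
print(D([Fraction(5)]) == D([Fraction(3)]))  # True: D(5) = D(3) = 0
```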