Usually the inverse of a square $n \times n$ matrix $A$ is defined as a matrix $A'$ such that:
$A \cdot A' = A' \cdot A = E$
where $E$ is the identity matrix.
From this definition one proves uniqueness, but the proof relies essentially on the fact that $A'$ is both a right and a left inverse.
But what if we define right and left inverse matrices separately? Can we then prove that:
(1) the right inverse is unique (when it exists)
(2) the left inverse is unique (when it exists)
(3) the right inverse equals the left one
I mean, the usual definition seems too strong to me. Why is the inverse introduced this way? Is it because, with the separate definitions, these three statements cannot be proven?
Suppose we are given $n \times n$ matrices $A$ and $B$ such that $AB = I$, where $I$ is the $n \times n$ identity matrix. Now we will try to solve
$BX = I$
for an unknown $n \times n$ matrix $X$. We (left-)multiply both sides by $A$ to get
$ABX = AI$
which gives us
$IX = A$
so $X = A$ (because $IX = X$). Since a solution $X$ does exist (see the last paragraph below), it follows that $BA = I$. So we have just shown that $AB = I$ implies
$AB = BA = I$.
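As a quick numerical sanity check (a sketch, not part of the proof): take a random matrix $A$, compute a right inverse $B$ by solving $AB = I$ with NumPy, and verify that $B$ is also a left inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random n x n matrix; with probability 1 it is invertible.
A = rng.standard_normal((n, n))

# Solve A @ B = I for B, i.e. compute a right inverse of A.
B = np.linalg.solve(A, np.eye(n))

# B is a right inverse by construction, and it is a left inverse too.
print(np.allclose(A @ B, np.eye(n)))  # True
print(np.allclose(B @ A, np.eye(n)))  # True
```

Of course this checks only one example in floating-point arithmetic; the argument above is what establishes the fact in general.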
So as you can see, it is not necessary to introduce separate definitions for the left and right inverse, because they lead to the same matrix. Instead of $B$ we simply write $A^{-1}$.
A solution to $BX = I$ exists because the mapping $x \mapsto Bx$ from $\mathbb{R}^{n}$ to $\mathbb{R}^{n}$ can be shown to be bijective.
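The injectivity half of that claim follows directly from $AB = I$:

```latex
Bx = By \;\implies\; A(Bx) = A(By) \;\implies\; (AB)x = (AB)y \;\implies\; x = y.
```

An injective linear map from $\mathbb{R}^{n}$ to itself is automatically surjective (by rank–nullity), so $x \mapsto Bx$ is bijective, and in particular each column of $I$ is hit by some vector, which yields the solution $X$.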