One part of the invertible matrix theorem is the statement that a square matrix is invertible iff its kernel is $\{0\}$. The "only if" direction seems pretty easy, but I wasn't so sure about the "if" part. My attempt at it is below. Is it correct?
Let $A$ be an $n$-by-$n$ matrix that acts as a linear map from a vector space $U$ to a vector space $V$ (each of dimension $n$ over a field $F$) with $\operatorname{ker}(A)=\{0\}$.
Suppose $x,y\in U$ satisfy $Ax=Ay$. Then $Ax-Ay=0\implies A(x-y)=0\implies x-y\in \operatorname{ker}(A)=\{0\}$, so $x=y$. This means $A$ is injective.
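As a numerical sanity check of this step (a plain-Python sketch, not part of the proof; the $2\times 2$ matrix and vectors below are arbitrary examples I chose, with $\det A\neq 0$ so the kernel is trivial):

```python
# A 2x2 matrix with trivial kernel (det = 1*4 - 2*3 = -2 != 0).
A = [[1, 2],
     [3, 4]]

def matvec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

x = [1, -1]
y = [2, 0]

# Since x != y and ker(A) = {0}, A(x - y) must be nonzero,
# i.e. Ax != Ay -- exactly the contrapositive of injectivity.
diff = [xi - yi for xi, yi in zip(x, y)]
print(matvec(A, diff))            # [-3, -7], a nonzero vector
print(matvec(A, diff) != [0, 0])  # True
```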
Since $\operatorname{ker}(A)=\{0\}$, $\operatorname{dim}(\operatorname{ker}(A))=0$.
By the rank-nullity theorem, $\operatorname{dim}(U)=\operatorname{dim}(\operatorname{ker}(A))+\operatorname{dim}(\operatorname{range}(A))=\operatorname{dim}(\operatorname{range}(A))$. But $\operatorname{dim}(V)=n=\operatorname{dim}(U)$, so $\operatorname{dim}(\operatorname{range}(A))=\operatorname{dim}(V)$.
Since $\operatorname{range}(A)$ is a subspace of $V$, it must be that $\operatorname{range}(A)=V$ (as they have the same dimension), so that $A$ is surjective.
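The dimension count can be illustrated concretely (a rough sketch in plain Python; the Gaussian-elimination rank function and the example matrix are my own illustration, not part of the question):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        # Find a pivot row at or below row r in this column.
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate the entries below the pivot.
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [3, 4]]  # trivial kernel, so nullity 0
n = 2
print(rank(A))        # 2: rank-nullity gives rank = n - nullity = n
```

With full rank $n$, the range is an $n$-dimensional subspace of $V$ and hence all of $V$, matching the surjectivity argument above.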
$A$ is bijective and so has an inverse, $A^{-1}$. This inverse is likewise a linear map, now from $V$ to $U$: for any $u_1,u_2\in U$, $A(u_1+u_2)=A(u_1)+A(u_2)$, so letting $u_1=A^{-1}(v_1)$ and $u_2=A^{-1}(v_2)$ (where $v_1,v_2$ can be any elements of $V$),
$A(A^{-1}(v_1)+A^{-1}(v_2))=A(A^{-1}(v_1))+A(A^{-1}(v_2))=v_1+v_2\\\implies A^{-1}(v_1)+A^{-1}(v_2)=A^{-1}(v_1+v_2)$ for all $v_1,v_2\in V$. Also, for any $u\in U$ and $c\in F$, $A(cu)=cA(u)$, so letting $u=A^{-1}(v)$ where $v$ is any element in $V$,
$A(cA^{-1}(v))=cA(A^{-1}(v))=cv\implies cA^{-1}(v)=A^{-1}(cv)$ for any $v\in V$ and any $c\in F$.
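The linearity of $A^{-1}$ can also be checked on a concrete example (a plain-Python sketch using the adjugate formula for a $2\times 2$ inverse; the matrix, vectors, and scalar are arbitrary choices of mine):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)  # nonzero iff the kernel is trivial
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(2)) for row in M]

A = [[1, 2], [3, 4]]
Ainv = inv2(A)

v1, v2, c = [1, 0], [0, 1], 5

# Additivity: A^{-1}(v1 + v2) == A^{-1}(v1) + A^{-1}(v2)
lhs = matvec(Ainv, [x + y for x, y in zip(v1, v2)])
rhs = [x + y for x, y in zip(matvec(Ainv, v1), matvec(Ainv, v2))]
print(lhs == rhs)    # True

# Homogeneity: A^{-1}(c v1) == c A^{-1}(v1)
lhs2 = matvec(Ainv, [c * x for x in v1])
rhs2 = [c * x for x in matvec(Ainv, v1)]
print(lhs2 == rhs2)  # True
```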