Let $M: \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m) \to \mathbb{R}^{m \times n}$ be the linear transformation defined by $M(T) = S_T$, where $T \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ and $S_T$ is the standard matrix of $T$ (obtained by applying $T$ to each vector of the standard basis of $\mathbb{R}^n$; the images form the columns of $S_T$).
How do I verify whether M is injective or not? How about M being surjective?
I know that a linear transformation $T: V \to W$ is injective
- iff $\{v \in V : Tv = 0\} = \{0\}$;
- iff whenever $\{v_1, \dots, v_n\}$ is linearly independent, $\{Tv_1, \dots, Tv_n\}$ is linearly independent.
But I don't see how to apply those facts to this kind of verification.
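For concreteness, the construction of $S_T$ in the question can be sketched in code (a minimal sketch; the sample map `T` and the helper name `standard_matrix` are illustrative, not part of the question):

```python
def standard_matrix(T, n):
    """Standard matrix of a linear map T: R^n -> R^m.
    Apply T to each standard basis vector e_i of R^n;
    the image T(e_i) becomes column i of the matrix."""
    cols = [T([1.0 if j == i else 0.0 for j in range(n)]) for i in range(n)]
    m = len(cols[0])
    # transpose the list of columns into a list of rows
    return [[cols[i][j] for i in range(n)] for j in range(m)]

# illustrative map T: R^2 -> R^3, T(x, y) = (x + 2y, 3y, x)
T = lambda v: [v[0] + 2 * v[1], 3 * v[1], v[0]]

S = standard_matrix(T, 2)
# S == [[1, 2], [0, 3], [1, 0]]  (rows of the 3x2 standard matrix)
```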
Injectivity will indeed suffice, since the two vector spaces have the same dimension ($\dim \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m) = nm = \dim \mathbb{R}^{m \times n}$), so by rank–nullity a linear map between them is injective iff it is surjective. Your question essentially reduces to showing that the matrix representation of a linear transformation is well defined, in the sense that the same matrix representation implies the same linear transformation.
To do this, choose the standard basis $\{e_1, \dots, e_n\}$ of $\mathbb{R}^n$ and the standard basis $\{f_1, \dots, f_m\}$ of $\mathbb{R}^m$, and note that a matrix representation is nothing but the specification $e_i \mapsto a_{1i}f_1 + \dots + a_{mi}f_m$: the $i$-th column of the matrix records the coordinates of $T(e_i)$.
A quick lemma: $$T(e_i) = 0 \ \ \forall i \in \{1, \dots, n\} \implies T = 0.$$
This can be proven directly by linearity: every $v \in \mathbb{R}^n$ is a linear combination $v = c_1 e_1 + \dots + c_n e_n$, so $T(v) = c_1 T(e_1) + \dots + c_n T(e_n) = 0$. (Equivalently, all the $e_i$ lie in the kernel, so the kernel is all of $\mathbb{R}^n$.)
From this the result follows readily: if $A(e_i) = B(e_i)$ for all $i$, then by linearity $(A - B)e_i = 0$ for all $i$, so the lemma gives $A - B = 0$, i.e., $A = B$.
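As a sanity check on this argument, here is a small Python sketch (the maps `A`, `B` and the helper names are illustrative assumptions, not from the question): two maps that agree on the standard basis have the same standard matrix, and therefore agree on every vector.

```python
def standard_matrix(T, n):
    # columns are the images T(e_i) of the standard basis vectors of R^n
    cols = [T([1.0 if j == i else 0.0 for j in range(n)]) for i in range(n)]
    return [[cols[i][j] for i in range(n)] for j in range(len(cols[0]))]

def matvec(S, v):
    # the linear map v -> S v recovered from a matrix S; the fact that
    # every matrix arises this way is the surjectivity of M
    return [sum(row[i] * v[i] for i in range(len(v))) for row in S]

# two syntactically different implementations of the same linear map R^2 -> R^2
A = lambda v: [2 * v[0] + v[1], v[1]]
B = lambda v: [v[0] + v[0] + v[1], 0.5 * (2 * v[1])]

assert standard_matrix(A, 2) == standard_matrix(B, 2)      # same matrix...
v = [3.0, -1.5]
assert A(v) == B(v) == matvec(standard_matrix(A, 2), v)    # ...same map
```

The last line is the injectivity argument in miniature: since both maps send $e_1, e_2$ to the same images, linearity forces them to agree on every $v$.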