For a given linear mapping $\Phi:V\rightarrow W$, where $V$ and $W$ are vector spaces, the rank-nullity theorem states that $\dim(\ker(\Phi))+\dim(\mathrm{Im}(\Phi))=\dim(V).$ A textbook that I'm studying (Mathematics for Machine Learning by Deisenroth et al.) makes the following claim (p. 60):
If $\dim(V)=\dim(W)$, then $\Phi$ is bijective (since $\mathrm{Im}(\Phi)\subseteq W$).
This doesn't seem correct, though: if $\Phi$ is bijective, then in particular it is injective (one-to-one). Injectivity of $\Phi$ means that at most one element of $V$ maps to $\mathbf{0_W}$, and since $\Phi(\mathbf{0_V})=\mathbf{0_W}$, that element must be $\mathbf{0_V}$. Consequently, $\ker(\Phi)$ would need to be trivial (contain only $\mathbf{0_V}$). But the condition $\dim(V)=\dim(W)$ alone doesn't seem to imply that $\ker(\Phi)$ is trivial.
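To make the objection concrete, here is a simple map where the dimensions agree but bijectivity fails:
$$\Phi:\mathbb{R}^2\to\mathbb{R}^2,\qquad \Phi(x,y)=(x,0).$$
Here $\dim(V)=\dim(W)=2$, yet $\ker(\Phi)=\mathrm{span}\{(0,1)\}$ is nontrivial, so $\Phi$ is neither injective nor surjective. Rank-nullity still holds: $\dim(\ker(\Phi))+\dim(\mathrm{Im}(\Phi))=1+1=2=\dim(V)$.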
My question is: should the condition instead be $\dim(V)=\dim(\mathrm{Im}(\Phi))$ rather than $\dim(V)=\dim(W)$?
Thank you in advance!
(As a side note, MML has been a delight to learn from so far!)