Let $T: V \to W$ be a linear map. Is it possible for several distinct $T$'s to have the same kernel dimension $\dim(\ker T)$?
If not, why not?
Yes. For example, take $V = W = \Bbb{R}$ and the maps $T_1(x) = x$ and $T_2(x) = 2x$.
Then $\dim(\ker(T_1)) = \dim(\ker(T_2)) = 0$, but $T_1 \neq T_2$.
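To see this concretely, here is a small numerical check (a sketch using numpy): the two maps are represented as $1 \times 1$ matrices, and the kernel dimension is computed via rank-nullity as $\dim V - \operatorname{rank}$.

```python
import numpy as np

# T1(x) = x and T2(x) = 2x as 1x1 matrices on V = W = R
T1 = np.array([[1.0]])
T2 = np.array([[2.0]])

n = 1  # dim V
# dim ker T = dim V - rank T (rank-nullity theorem)
null_T1 = n - np.linalg.matrix_rank(T1)
null_T2 = n - np.linalg.matrix_rank(T2)

print(null_T1, null_T2)  # 0 0 -- same kernel dimension, different maps
```

Both maps have trivial kernel, yet they are clearly different linear maps.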
Let $V$ be an $n$-dimensional vector space with $n \geq 1$, and suppose we want $\dim(\ker(T))=m \leq n$. Pick a basis $B=\{v_1,\dots,v_n\}$ for $V$ and a basis $C=\{w_1,\dots,w_s\}$ for $W$, and assume $s \geq n-m$. Select any $m$ vectors $\{u_1,\dots,u_m\}$ from $B$. Construct $H:V \rightarrow W$ by sending each $u_i$ to $0$ and mapping the remaining $n-m$ vectors of $B$ injectively to distinct vectors of $C$. Since those images are linearly independent, the kernel of $H$ is exactly the span of $\{u_1,\dots,u_m\}$ and hence has dimension $m$. The number of maps of this form is $\binom{n}{m}\cdot s(s-1)\cdots(s-(n-m)+1)$: first choose which $m$ basis vectors are sent to $0$, then choose an injection of the remaining $n-m$ basis vectors into $C$.
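The construction can be sketched in numpy (the specific sizes $n=4$, $m=2$, $s=3$ are an illustrative choice, not from the answer): the matrix of $H$ has the images of the basis vectors as its columns.

```python
import numpy as np

# Illustrative instance of the construction: V = R^n, W = R^s,
# target kernel dimension m (sizes chosen for the example).
n, m, s = 4, 2, 3

# Columns of H are the images of the standard basis of V.
# Send the first m basis vectors to 0, and map the remaining n - m
# injectively onto distinct standard basis vectors of W
# (this needs s >= n - m so the images are linearly independent).
H = np.zeros((s, n))
for j in range(n - m):
    H[j, m + j] = 1.0

nullity = n - np.linalg.matrix_rank(H)
print(nullity)  # 2, i.e. dim ker H = m
```

Picking a different set of $m$ basis vectors to kill, or a different injection into $C$, yields a different map with the same kernel dimension.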
Let's look at an example to understand the question a bit better.
Let's for simplicity work with $\mathbb R$-vector spaces, i.e. $V = \mathbb R^n$ and $W = \mathbb R^m$ - when I picture things, I like to think $V = \mathbb R^2 = W$. So we just have a 2-dimensional plane.
What's a linear map $V \to W$ now? It's just some function $T: \mathbb R^2 \to \mathbb R^2$ such that $T(\lambda a+b) = \lambda T(a) + T(b)$ for any $\lambda \in \mathbb R$ and $a,b \in \mathbb R^2$. This means exactly that I can define $T$ on basis vectors and this will determine how $T$ acts on all of $\mathbb R^2$. Let's pick $\{(1,0),(0,1)\}$ as a basis here; it doesn't really matter what basis we pick, but this one is familiar and easy. Write $(x,y) \in \mathbb R^2$ to mean $x(1,0) + y(0,1)$. Suppose we set $T((1,0)) =w_1$ and $T((0,1)) = w_2$. Then $$T((x,y)) = T(x(1,0) + y(0,1)) = xT(1,0) + yT(0,1) = xw_1 + yw_2. $$ Hence $T$ really is determined by the images we picked for our basis elements.
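The "determined by basis images" point can be checked numerically (a sketch; the particular $w_1$, $w_2$, and test vector are arbitrary choices): as a matrix, $T$ simply has $w_1$ and $w_2$ as its columns.

```python
import numpy as np

# Arbitrary choice of images for the basis vectors (1,0) and (0,1):
w1 = np.array([3.0, 1.0])
w2 = np.array([-2.0, 4.0])

# As a matrix, T has w1 and w2 as its columns.
T = np.column_stack([w1, w2])

x, y = 5.0, -1.0
v = np.array([x, y])

# T((x, y)) agrees with x*w1 + y*w2, exactly as derived above.
print(np.allclose(T @ v, x * w1 + y * w2))  # True
```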
Now how can we picture such a map? The thing to note here is that the image of $T$ is spanned by $w_1$ and $w_2$ (which in this situation are elements of $\mathbb R^2$). This means that the image of $T$ is either $\mathbb R^2$ (if $w_1$ and $w_2$ are linearly independent), some one-dimensional subspace isomorphic to $\mathbb R$ (if at least one of $w_1, w_2$ is non-zero and they are linearly dependent), or zero, if $w_1 = 0 = w_2$.
This also gives us a way of mapping $\mathbb R^2$ linearly onto any line through the origin: say we want the line $ay = bx$. Then we just pick $w_1 = (a,b) = w_2$, i.e. we define $T$ through $T((1,0)) = (a,b) = T((0,1))$ (note $(a,b)$ lies on the line, since $a \cdot b = b \cdot a$). In this case the image of the map is one-dimensional.
Now the rank-nullity theorem says that if $T: V \to W$ is a linear map, and $V$ is an $n$-dimensional vector space, then $\dim(\ker T) + \dim(\text{im } T) = n$. In the case of our example, $n = 2$. Now if $T$ is any transformation which maps $\mathbb R^2$ onto a line, we have $\dim(\text{im } T) = 1$, and hence $\dim(\ker T ) = 1$. But of course there are infinitely many lines through the origin, so we can find infinitely many linear maps with the same kernel dimension.
Similarly, we can find infinitely many distinct linear maps with kernel dimension zero: these correspond to different choices of linearly independent $w_1, w_2$. However, there is only one linear map with kernel dimension $2$, because we'd have to choose $w_1 = 0 = w_2$.
I believe this answers your question and hopefully gives you some insight about kernels. What I want you to take away from this is that $\dim(\ker T)$ is a very coarse piece of information about $T$: it tells us roughly what $T$ does, namely what dimension the image of $T$ has, but it by no means determines $T$.