In the following proof, what does "But the expansion of a vector relative to a basis is unique" mean?
Let $T:V\to W$ be a linear transformation. Then $$\dim(\operatorname{Im}(T))+\dim(\ker(T))=\dim(V)$$ $\mathbf{Proof:}$
Let $\dim(\ker(T)) = k$. We start by choosing a basis $$\{v_1,v_2,\dots,v_k\}$$ for the kernel.
If $k = 0$, then $\ker(T) = \{0\}$ and the basis is empty, so the vectors $v_1, \dots, v_k$ do not appear in the rest of the proof.
Let $\dim(V) = n$. If $n = k$, then every vector in $V$ is in the kernel, so $\operatorname{Im}(T) = \{0\}$ has dimension $0$ and there is nothing to prove.
If $n > k$, then we can add an additional $n - k$ vectors to the basis for $\ker(T)$ to get a basis for $V$: $$\{v_1, \dots, v_k, v_{k+1}, \dots, v_n\}$$
We claim that $$\{T(v_{k+1}), \dots, T(v_n)\}$$ is a basis for $\operatorname{Im}(T)$.
Proof that this is a spanning set for $\operatorname{Im}(T)$:
Every vector in $\operatorname{Im}(T)$ has the form $w = T(u)$ for some $u \in V$. We can expand $u$ using the basis for $V$: $$u = c_1v_1 + \cdots + c_kv_k + c_{k+1}v_{k+1} + \cdots + c_nv_n$$ Since the vectors $v_1, \dots, v_k$ are in the kernel of $T$, applying $T$ gives $$w = T(u) = c_{k+1}T(v_{k+1}) + \cdots + c_nT(v_n)$$ So every vector in $\operatorname{Im}(T)$ is a linear combination of $T(v_{k+1}), \dots, T(v_n)$.
Proof that this is a linearly independent set:
Suppose we have $$a_{k+1}T(v_{k+1}) + \cdots + a_nT(v_n) = 0$$
Then, since $T$ is linear, $$T(a_{k+1}v_{k+1} + \cdots + a_nv_n) = 0$$
So the vector $a_{k+1}v_{k+1} + \cdots + a_nv_n$ is in the kernel of $T$.
Since $\{v_1, \dots, v_k\}$ is a basis for the kernel, it must be possible to write $$a_{k+1}v_{k+1} + \cdots + a_nv_n = b_1v_1 + \cdots + b_kv_k$$
But the expansion of a vector relative to a basis is unique, so all the $a_i$ and all the $b_j$ must be zero.
Therefore $$\{T(v_{k+1}), \dots, T(v_n)\}$$
is a basis for $\operatorname{Im}(T)$.
Hence $$\dim(\operatorname{Im}(T)) = n - k$$ and $$\dim(\ker(T)) = k, \qquad \dim(V) = n,$$
which shows that $$\dim(\operatorname{Im}(T)) + \dim(\ker(T)) = \dim(V)$$
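(Not part of the proof, but the identity is easy to check numerically for a concrete linear map $T(x) = Ax$. The sketch below uses NumPy; the matrix $A$ is an arbitrary illustrative choice, built so that its rank is visibly $2$.)

```python
import numpy as np

# T: R^4 -> R^3 represented by a 3x4 matrix (arbitrary illustrative choice).
# The third row is the sum of the first two, so the rank is 2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

n = A.shape[1]                      # dim(V) = number of columns
rank = np.linalg.matrix_rank(A)     # dim(Im(T))

# dim(ker(T)) = number of basis directions T sends to zero; numerically,
# it is n minus the number of singular values that are nonzero.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - np.sum(s > 1e-10)

print(rank, nullity, rank + nullity == n)  # 2 2 True
```

Here the rank is $\dim(\operatorname{Im}(T))$ and the nullity is $\dim(\ker(T))$, so the final line is exactly the statement of the theorem.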
EDIT:
If
$$a_{k+1}v_{k+1} + \cdots + a_nv_n = b_1v_1 + \cdots + b_kv_k$$
is possible for some constants, then that means
$$-b_1v_1 - \cdots - b_kv_k + a_{k+1}v_{k+1} + \cdots + a_nv_n = 0$$
is possible for some constants. But since $v_1,\dots,v_k,v_{k+1},\dots,v_n$ form a basis for $V$, they are linearly independent, so the only constants satisfying this equation are all zero; in particular every $a_i$ must be zero.
Is that correct?
It means that for a fixed basis $\{v_1,\dots,v_n\}$, there is only one way to express a given vector $u$ as a linear combination of the $v_i$. In other words, the basis uniquely determines the coefficients in $u=\sum_i c_iv_i$.
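In case it helps, the uniqueness claim is just linear independence in disguise: if $u$ had two expansions,
$$u = \sum_{i=1}^n c_i v_i = \sum_{i=1}^n d_i v_i,$$
then subtracting them gives
$$\sum_{i=1}^n (c_i - d_i) v_i = 0,$$
and linear independence of the $v_i$ forces $c_i = d_i$ for every $i$.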
So, if you move all of the terms to one side of the equation, you get an expansion of the zero vector in terms of the chosen basis of $V$, but there’s only one such expansion, for which all of the coefficients are zero.
Frankly, although this part of the argument is valid, I don’t really like it. I think that your use of linear independence of the $v_i$ to reach the same conclusion is more direct (and more commonly used). It simply uses the definitions of basis and linear independence instead of relying on some other result that is itself a consequence of those definitions.