So the theorem is
Let $T : V \rightarrow W$ be a linear transformation. Prove that if $B = \{v_1,\cdots,v_n\}$ is a basis for the domain $V$, then $S = \{T(v_1), T(v_2), \cdots , T(v_n)\}$ is a basis for the range of $T$.
The proof I came across: (Note that in the proof we assume we already know that $T(v_1),\cdots, T(v_n)$ span $\operatorname{range}(T)$.)
When proving the linear independence of $S$, the author only considers the case
$$T(O) = O$$
What about the case where $b \in V$ and $b \neq O$, but
$$T(b) = O$$?
Clearly, we can write $b$ as a linear combination of the vectors in $B$:
$$b = \sum_{i=1}^{n}c_iv_i$$
And after substituting and rearranging, we have the following:
$$\sum_{i=1}^{n}c_iT(v_i) = O,$$
where not all of the scalars $c_i$ are zero.
For the proposition to be correct, this case must be impossible. But why is that so?
So in summary:
Question 1
Is the proof above correct? If so, why didn't the author consider the case where $b \in V$, $b \neq O$, but $T(b) = O$?
Question 2
Is there an alternative way to prove the theorem? (Preferably, I would like to see a proof that starts like: "Suppose $v_1,\cdots,v_n$ form a basis. Consider the linear combination $\sum_{i=1}^{n}k_iT(v_i) = O$. Then we have...")

This "theorem" is false, and the "proof" is nonsense for exactly the reason you pointed out.
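A concrete counterexample (not in the original post, but standard): take the projection $T:\mathbb{R}^2\to\mathbb{R}^2$ given by
$$T(x,y) = (x,0).$$
The standard basis $B = \{(1,0),(0,1)\}$ is a basis for $\mathbb{R}^2$, but its image is
$$S = \{T(1,0),\, T(0,1)\} = \{(1,0),(0,0)\},$$
which contains the zero vector and is therefore linearly dependent. So $S$ is not a basis for $\operatorname{range}(T) = \{(x,0) : x \in \mathbb{R}\}$, exactly because $(0,1) \neq O$ but $T(0,1) = O$.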
However, there is an actual (true) theorem which is reminiscent.
Theorem. Let $T:V\to W$ be a linear map between vector spaces $V$ and $W$. Then the following are equivalent.
(i) For any $\mathcal{B}\subseteq V$, we have that $\mathcal{B}$ is a basis for $V$ if and only if $T(\mathcal{B})$ is a basis for $T(V)$.
(ii) $T$ is injective.
(iii) $\mathcal{N}(T)=\{0\}$.
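For Question 2, here is a sketch of the relevant direction, written in the form you asked for, under the extra hypothesis (iii) that $\mathcal{N}(T)=\{0\}$ (without it, the claim is false, as noted above):

Suppose $v_1,\cdots,v_n$ form a basis for $V$. Consider a linear combination
$$\sum_{i=1}^{n}k_iT(v_i) = O.$$
By linearity,
$$T\!\left(\sum_{i=1}^{n}k_iv_i\right) = O,$$
so $\sum_{i=1}^{n}k_iv_i \in \mathcal{N}(T) = \{0\}$, i.e. $\sum_{i=1}^{n}k_iv_i = 0$. Since $v_1,\cdots,v_n$ are linearly independent, every $k_i = 0$. Hence $T(v_1),\cdots,T(v_n)$ are linearly independent, and since they span $T(V)$, they form a basis for $T(V)$.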