During the lecture my professor mentioned something like "cancellation is perfectly fine in a ring when dealing with addition, but not with multiplication!". The example he gave was that, in $\mathbb{Z}_6$, $[3]\times[2]=[3]\times[0]=[0]$, but obviously $[2]\neq[0]$. I get that part.
But why is cancellation in addition valid? I don't quite get that. To give an example, when proving that $f(0)=0$ if $f$ is an isomorphism, we have something like $f(0)=f(0+0)=f(0)+f(0) \implies f(0)=0$. Why can we just move one $f(0)$ to the other side?
I suspected that this is because we can always add $-f(0)$ to both sides of the equation, which makes the left-hand side equal to zero. But then the question becomes: why can we add the same thing to both sides of an equation and have it still hold?
Can anyone help me with this? Thanks!
Your suspicion is correct: you can add $-f(0)$ to both sides, and after applying a few ring axioms the extra $f(0)$ disappears. You can't always do this for multiplication because ring elements need not have multiplicative inverses.
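Spelled out step by step, the cancellation uses only the additive group axioms (here's one way to write it; each line is justified on the right):

$$
\begin{align}
f(0) &= f(0) + f(0) \\
f(0) + \bigl(-f(0)\bigr) &= \bigl(f(0) + f(0)\bigr) + \bigl(-f(0)\bigr) && \text{add } -f(0) \text{ to both sides} \\
0 &= f(0) + \Bigl(f(0) + \bigl(-f(0)\bigr)\Bigr) && \text{associativity of } + \\
0 &= f(0) + 0 && \text{additive inverse} \\
0 &= f(0) && \text{additive identity}
\end{align}
$$

Every step is available in any ring because addition always forms a group; the analogous argument for multiplication would need $c^{-1}$, which may not exist.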
As for "why can we add/subtract/multiply/etc things to both sides of the equation", it's just a property of functions. For example, let's show that $a = b \implies a + c = b + c$. Define $f(x)$ to be $x + c$. Since $a = b$, we know that $f(a) = f(b)$, because functions are single-valued. So $a + c = b + c$.
You don't really need to define $f$ to do this, since $+$ is a well-defined function already, but doing so might make it clearer to you.
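If a brute-force sanity check helps, here's a short Python sketch (just an illustration, not part of any proof) that verifies in $\mathbb{Z}_6$ that additive cancellation holds for every triple, while multiplicative cancellation has counterexamples, including the professor's $[3]\times[2]=[3]\times[0]$:

```python
# Brute-force check in Z_6 = {0, 1, 2, 3, 4, 5} with arithmetic mod 6.
n = 6
Z6 = range(n)

# Additive cancellation: whenever a + c == b + c (mod 6), we must have a == b.
add_cancels = all(
    a == b
    for a in Z6 for b in Z6 for c in Z6
    if (a + c) % n == (b + c) % n
)

# Multiplicative cancellation: collect triples (a, b, c) with c != 0
# where a*c == b*c (mod 6) yet a != b. Any such triple is a counterexample.
counterexamples = [
    (a, b, c)
    for a in Z6 for b in Z6 for c in Z6
    if c != 0 and a != b and (a * c) % n == (b * c) % n
]

print(add_cancels)                    # True: addition always cancels
print((2, 0, 3) in counterexamples)   # True: 2*3 == 0*3 == 0 (mod 6)
```

The reason the first check never finds a counterexample is exactly the argument above: $-c$ always exists, so $x \mapsto x + c$ is invertible (a bijection on $\mathbb{Z}_6$), whereas $x \mapsto 3x$ is not.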