I'm reading Computational Complexity: A Modern Approach and one of the exercises is:
Prove that [the addition function is] computable by writing down a full description (including the states, alphabet, and transition function) of the corresponding Turing [machine].
I successfully wrote this description, but I don't think it is a proof that the addition function is computable.
According to this question, the definition of a computable function is:
If $f:\Sigma^* \to \Sigma^*$ is a function and $\exists$ a Turing machine which, on input $w\in\Sigma^*$, writes $f(w)$, $\forall w\in\Sigma^*$, then we call $f$ a computable function.
With the Turing machine I wrote, I can prove the machine is able to compute $f(w)$ for some inputs, but how can I prove it can do the same for every input $w\in\Sigma^*$? In other words, how can I prove $f$ is computable?
If you can prove that for any two natural numbers $m,n$ the output of your machine on (an encoding of) $m$ and $n$ is $m+n$, you are done. The standard tool for such a proof is induction, typically on the size of the inputs or on the number of computation steps.
You are in the same situation as with any theorem: once you have found a proof, it may still contain an error that someone is able to point out and correct.
Likewise, your Turing program may be wrong, but validating it in mathematical logic does not go through testing it on various values; it goes through convincing yourself (and others) that your Turing program is the right one.
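To make this concrete, here is a minimal sketch (not the machine from the book, just an illustrative one I made up) of a Turing machine that adds two numbers in unary notation, written as $m$ ones, a `+`, then $n$ ones. A simulator like this lets you *check* the transition table on sample inputs, but as said above, only an inductive argument about the states proves it correct for all $m,n$:

```python
# Hypothetical unary-addition Turing machine.
# States: q0 (scan right for '+'), q1 (scan right for the blank),
#         q2 (erase the extra '1'), halt.
# Tape alphabet: {'1', '+', '_'} where '_' is the blank symbol.
DELTA = {
    ('q0', '1'): ('q0', '1', +1),   # skip the first block of ones
    ('q0', '+'): ('q1', '1', +1),   # turn '+' into '1' (now one '1' too many)
    ('q1', '1'): ('q1', '1', +1),   # skip the second block of ones
    ('q1', '_'): ('q2', '_', -1),   # hit the end of the tape, step back
    ('q2', '1'): ('halt', '_', 0),  # erase the surplus '1' and halt
}

def run(tape_str):
    """Simulate the machine on the given input and return the final tape."""
    tape = dict(enumerate(tape_str))
    state, head = 'q0', 0
    while state != 'halt':
        symbol = tape.get(head, '_')
        state, write, move = DELTA[(state, symbol)]
        tape[head] = write
        head += move
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

print(run('111+11'))  # '11111', i.e. 3 + 2 = 5
```

The correctness proof would argue, by induction on the head position, that state `q0` preserves the first block, that the single rewrite of `+` adds exactly one spurious `1`, and that `q2` removes exactly one `1` before halting, so the final tape always holds $m+n$ ones.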
Additional elements
According to the Church–Turing thesis, every computable function can be computed by a Turing machine, and there are systematic ways to convert a $\mu$-recursive function into a Turing machine. Using those constructions (which you would also have to prove correct!), you can convert addition, which is primitive recursive, into a Turing machine.
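For reference, the primitive recursive definition of addition is $\mathrm{add}(m,0)=m$ and $\mathrm{add}(m,n+1)=S(\mathrm{add}(m,n))$, where $S$ is the successor function. A sketch transcribing that schema directly (the function names are mine, not standard notation):

```python
# Addition built only from the successor function and the primitive
# recursion schema: add(m, 0) = m, add(m, n + 1) = S(add(m, n)).

def S(x):
    """Successor function: the only basic arithmetic operation used."""
    return x + 1

def add(m, n):
    """Primitive recursion on the second argument."""
    return m if n == 0 else S(add(m, n - 1))

print(add(3, 2))  # 5
```

The standard constructions mentioned above mechanically turn each instance of this schema into a Turing machine, which is one route to the exercise; proving the general construction correct is, of course, more work than verifying one hand-written machine.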