I came up with the proof in the paragraph below. My question is about how I expressed the proof, and about the first part of the question above.
For one, my proof seems very wordy compared to the proofs in my textbook or those shown by my professors, so I'd appreciate input on how to express it more formally. Also, I haven't shown that $d$ (where $d = \gcd(a, 0)$) exists, and I don't see how I would do so.
PROOF: Suppose $d = \gcd(a, 0)$, where $d$ is an integer. Then $d \mid a$ and $d \mid 0$. Since every integer divides $0$, $d$ is simply the largest divisor of $a$. The largest divisor of a positive integer is the integer itself; but $a$ may be negative, while $d$, by definition, is greater than or equal to zero, so $d = |a|$.
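As a quick sanity check (not part of the proof), Python's built-in `math.gcd`, which returns a nonnegative result and accepts negative arguments, agrees with the claim $d = |a|$:

```python
import math

# Check gcd(a, 0) == |a| for a range of integers, including negatives.
for a in range(-10, 11):
    assert math.gcd(a, 0) == abs(a)

print("gcd(a, 0) == |a| holds for a in [-10, 10]")
```

Note that this also covers $a = 0$: by the usual convention, $\gcd(0, 0) = 0 = |0|$.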
I appreciate any answers. Thanks!
You can use the Bézout-style definition of the gcd:
Understanding the Existence and Uniqueness of the GCD
$$\gcd(a,b)=\min\,\{au+bv\mid au+bv > 0,\, (u,v)\in \mathbb Z^2\}$$
For $b=0$ (and $a\neq 0$) this gives
$$\gcd(a,0)=\min\,\{au\mid au>0,\, u\in \mathbb Z\},$$
and the minimum is attained at $u=\pm 1=\operatorname{sgn}(a)$, since the smallest positive multiple of $a$ is $|a|$. Hence $\gcd(a,0)=|a|$.
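This characterization can also be checked numerically. The sketch below uses a hypothetical helper `gcd_via_bezout` that brute-forces the minimum of the Bézout set over a finite window of coefficients (enough for small inputs); for $b = 0$ the minimum comes out as $|a|$:

```python
import math

def gcd_via_bezout(a, b, bound=50):
    """Brute-force the Bezout characterization:
    gcd(a, b) = min{ a*u + b*v : a*u + b*v > 0, u, v integers }.
    A finite search window suffices for small a and b, since the
    minimizing coefficients can be taken with small absolute value."""
    values = [a * u + b * v
              for u in range(-bound, bound + 1)
              for v in range(-bound, bound + 1)
              if a * u + b * v > 0]
    return min(values)

# With b = 0 the minimum is a * sgn(a) = |a|, as claimed.
for a in (-12, -1, 3, 45):
    assert gcd_via_bezout(a, 0) == abs(a) == math.gcd(a, 0)
```

The same helper reproduces the ordinary gcd for nonzero pairs as well, e.g. `gcd_via_bezout(8, 12)` equals `math.gcd(8, 12)`.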
To go further, there are factorial rings (unique factorization domains) in which the notion of sign (positivity or negativity) is not defined (think of complex numbers, for instance).
In this case the gcd is defined only up to multiplication by a unit, where the units are the divisors of $1$.
So taking $d>0$ is just a convention for the integers.