99% of the variables, constants, etc. that I run into are denoted by either a Latin character (like $x$) or a Greek character (e.g. $\pi$). Sometimes I twitch a little when I have to keep two separate meanings for the same symbol in my head at once.
Back during the Renaissance and earlier, I can see why this would be the case (everyone who set today's notational conventions wrote in Latin or Greek). But it's 2016; thanks to globalization and technological advances, when symbol overloading becomes an issue one could simply look through the Unicode standard (e.g. in a character map or virtual keyboard) and pick a favorite character.
So why do we (mostly) limit our choice of symbols to Latin and Greek characters?
It's partly for historical reasons and partly for practical ones. You've already figured out the historical reasons.
One practical reason is that notation is meant to convey ideas; it's not meant to be used for its own sake. If your readers get stuck on whether your notation is correct, or worse, on what your notation even means, instead of asking whether your ideas are correct, then the notation has failed.
Symbol overloading can be a problem, especially in a long document. Suppose you're writing a book about prime numbers. First you need $p$ and $q$ to be any odd positive primes, then you need them to be any primes in $\mathbb{Z}$ whatsoever, next you need $q - 1$ to be a multiple of $p$, then you need them to have quadratic reciprocity, and later on still you need them to not have quadratic reciprocity.
You could be tempted to declare that $p$ and $q$ are any positive odd primes; ぱ and く are any primes in $\mathbb{Z}$; パ and ク are primes such that one is one more than a multiple of the other; $\hat{p}$ and $\hat{q}$ have quadratic reciprocity; and ب and خ don't have quadratic reciprocity. You declare them all at the beginning of the book and then use them without further explanation.
I'm sure these could be made a little less arbitrary, but if you have to read such a book, it's gonna get pretty damn tiresome to keep having to refer to the list of symbols at the beginning. It is better, in my opinion, to redefine the symbols at each theorem or other logical division of your document.
It might also help to think of it as somewhat analogous to accidentals in music notation: you have an E$\flat$ in measure 20, no E's of any kind for ten bars, then an E with no accidental next to it. Is it supposed to be E$\flat$? Probably not. If there were only one intervening barline, you could reasonably think the composer actually meant E$\flat$ again. But ten barlines is quite enough to cancel it, I think.
So if in Theorem 2.1 the author uses $p$ and $q$ to mean primes with quadratic reciprocity, and then pages later, in Theorem 2.7, uses $p$ and $q$ just to mean "primes," I would take it that quadratic reciprocity is no longer a requirement.
Font variations can help generate distinct symbols that still bear some recognizable relation to each other. $P$ and $Q$ could be specific sets of primes, $\mathcal{P}$ and $\mathcal{Q}$ could be the products of the primes in those sets, $\mathbb{P}$ could be the set of all primes in $\mathbb{Z}$, $\mathfrak{P}$ and $\mathfrak{Q}$ could be ideals generated by primes in those sets, etc.
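If it helps to see the mechanics, here's a minimal LaTeX sketch of how those variants are produced (note that `\mathbb` and `\mathfrak` come from the amssymb/amsfonts packages rather than base LaTeX; the meanings in parentheses are just the hypothetical ones from above):

```latex
\documentclass{article}
\usepackage{amssymb} % provides \mathbb and \mathfrak

\begin{document}
Same base letter, four related but distinct symbols:
$P$ (a specific set of primes),
$\mathcal{P}$ (the product of the primes in that set),
$\mathbb{P}$ (the set of all primes in $\mathbb{Z}$),
$\mathfrak{P}$ (an ideal generated by one of those primes).
\end{document}
```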
But even going this route it's possible to get carried away. Better to use a few symbols judiciously than a lot of symbols recklessly.
And one last point: even within the Latin and Greek alphabets, we limit ourselves further. $l$ is barely used, since it's too easily confused with $1$ or $I$. Certain Greek letters don't get much play either, because of their similarity to Latin letters: compare uppercase $A$ with uppercase alpha, for example. The potential for confusion multiplies in the broader Unicode palette: is ム from katakana, bopomofo, or the CJK unified ideographs? You can't tell just by looking at it.
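To make that last point concrete: the only reliable way to tell such look-alikes apart is to ask the Unicode character database rather than your eyes. A quick sketch in Python (the three characters are my own pick of examples; whether they look identical depends on your font):

```python
import unicodedata

# Three characters that can render nearly identically, yet come from
# three different Unicode blocks.
for ch in ["ム", "ㄙ", "厶"]:
    # name() reports e.g. "KATAKANA LETTER MU" for the first one;
    # the default argument keeps this from raising if a name is missing.
    name = unicodedata.name(ch, "<unnamed>")
    print(f"U+{ord(ch):04X}  {ch}  {name}")
```

A reader of your book has no such lookup handy; they only see the glyph on the page.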