Regarding the distributions of random variables, I understand the following points:
The sampling distribution of the mean of any random variable (with finite variance) is asymptotically normal - this is the Central Limit Theorem
The sums and differences of normally distributed random variables are also normally distributed
With this in mind, now consider the popular "T-Test". The T-Test can be used to determine whether the means of some random variable in two samples are equal - and in this case, we can treat the sample mean itself as a random variable. Since we know that the difference between the sample means of any two random variables is asymptotically normal, the T-Test exploits this fact and uses the normal distribution to decide whether the difference in means between the two samples is statistically significant.
Something I have never quite been able to understand: in a T-Test, we are told that the underlying distribution of both samples must be normal - yet the difference between the sample means of ANY two random variables is asymptotically normal.
My Question: Why is the assumption of normality required in a T-Test when we know that the difference between the sample means of any two random variables is (asymptotically) normally distributed, provided there are enough observations?
I can understand why this might not hold when we have very few observations in each sample - but when we have many observations in both samples, why is the assumption of normality still required for the T-Test?
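To illustrate the CLT argument above, here is a small simulation sketch (the choice of an exponential population and the sample sizes are my own, purely for illustration): even when the population is heavily skewed, the distribution of the sample mean becomes increasingly symmetric as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 10,000 sample means from a heavily skewed (exponential) population.
# Although the population is far from normal, the sampling distribution of
# the mean approaches normality as n grows (Central Limit Theorem).
for n in (5, 30, 500):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    # Skewness of the sample-mean distribution shrinks toward 0 (normal).
    skew = ((means - means.mean()) ** 3).mean() / means.std() ** 3
    print(f"n={n:4d}  skewness of sample means: {skew:.3f}")
```

For n=5 the sample means are still visibly skewed; by n=500 the skewness is close to zero.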
Thanks!

My understanding is that the normality assumption is not really required.
As you point out, the Central Limit Theorem helps here, but the real issue is how much it helps. This depends on how much you're violating the normality assumption and how much data you have. Maybe you have enough data---but maybe you don't.
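One way to see "how much the CLT helps" is to check the test's empirical type-I error rate under a skewed population. A minimal sketch (the exponential population, sample sizes, and trial count are my own illustrative choices): both samples come from the same distribution, so a well-calibrated test should reject about 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Empirical type-I error of a two-sample t-test when both samples come
# from the same heavily skewed (exponential) population. If the test is
# well calibrated, roughly 5% of p-values should fall below 0.05.
trials = 5_000
for n in (5, 200):
    rejections = 0
    for _ in range(trials):
        a = rng.exponential(scale=1.0, size=n)
        b = rng.exponential(scale=1.0, size=n)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            rejections += 1
    print(f"n={n:3d}  empirical type-I error: {rejections / trials:.3f}")
```

With a large n the rate sits near the nominal 5%; with a small n and a more extreme population (heavier tails, stronger skew) the calibration can drift further from nominal.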
Other discussions about this issue from CrossValidated: 1, 2, 3, 4, 5, 6.