In a $t$-test for a difference in means, the null is usually specified as $\mu_X-\mu_Y = 0$ and the alternative hypothesis is either $\mu_X-\mu_Y>0$ or $\mu_X-\mu_Y<0$. In any such test, we compute the observed $t$-statistic, which we call $t^*$, and then assess it via the p-value, defined as:
$$ P(t \ge t^* \mid H_0) $$
that is, the probability of observing a $t$-statistic at least as extreme as the one we computed from the data, given that the null hypothesis is true. This p-value corresponds to the right-tailed test, where the alternative is $\mu_X-\mu_Y>0$. For the case where the alternative is $\mu_X-\mu_Y<0$, the p-value becomes $P(t \le t^* \mid H_0)$.
My question then is: why does the alternative $\mu_X-\mu_Y>0$ translate into finding the probability of a $t$-statistic greater than the observed $t^*$ under the null distribution, and vice versa? I understand that the estimated difference $\bar{X}-\bar{Y}$ itself has a direction, but why does $t^*$ inherit that direction too? Is it because dividing by the standard error is dividing by something positive, and hence directions are preserved?
Meaning, is the sign of the statistic $\bar{X}-\bar{Y}$ preserved in $t^*$ because the standard error is a positive value?
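As a quick numerical sanity check of my own (simulated data, not from any particular example), the $t$-statistic does carry the same sign as the difference in sample means, since the standard error in the denominator is strictly positive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=40)  # sample X
y = rng.normal(loc=0.0, scale=2.0, size=40)  # sample Y

t_stat, _ = stats.ttest_ind(x, y)
mean_diff = x.mean() - y.mean()

# Dividing by a positive standard error preserves direction,
# so the t-statistic and the mean difference share a sign.
assert np.sign(t_stat) == np.sign(mean_diff)
```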
The P-value is the probability of a value of the test statistic as or more extreme than what was observed, in the direction or direction(s) of the alternative: right tail for alternative $>,$ left tail for alternative $<,$ both tails for alternative $\ne.$
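A minimal sketch of that rule, assuming a hypothetical observed statistic `t_obs` with `df` degrees of freedom (values chosen only for illustration):

```python
from scipy import stats

t_obs, df = 2.1, 25  # hypothetical observed t-statistic and degrees of freedom

p_right = stats.t.sf(t_obs, df)           # H1: mu_X - mu_Y > 0  (right tail)
p_left  = stats.t.cdf(t_obs, df)          # H1: mu_X - mu_Y < 0  (left tail)
p_two   = 2 * stats.t.sf(abs(t_obs), df)  # H1: mu_X - mu_Y != 0 (both tails)

print(p_right, p_left, p_two)
```

Note that the two one-sided P-values sum to 1, and by the symmetry of the $t$ distribution the two-sided P-value is twice the smaller one-sided tail.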
Here is output for left- and two-tailed tests, done in Minitab: same data, same null hypothesis, different P-values. The left-sided test is significant at the 5% level; the two-sided test is not.
In the following plot, the P-value for the two-sided test is the area outside the two vertical lines (solid left line at the observed $T,$ dotted right line just as far from 0 in the opposite direction).
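The same phenomenon can be reproduced numerically (the values here are my own illustration, not the Minitab output above): with an observed $T$ of $-1.9$ on 30 degrees of freedom, the left-tailed P-value falls below 0.05 while the two-tailed P-value does not.

```python
from scipy import stats

t_obs, df = -1.9, 30  # hypothetical observed statistic, for illustration only

p_left = stats.t.cdf(t_obs, df)          # area to the left of the observed T
p_two  = 2 * stats.t.sf(abs(t_obs), df)  # area outside -|T| and +|T|

print(round(p_left, 4), round(p_two, 4))
assert p_left < 0.05 < p_two  # left-sided significant at 5%, two-sided not
```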