I'm wondering if someone could explain in high-school-level language what the difference is between these concepts?
I'm looking at the results of an OLS regression for a single categorical variable (advertising budget: low, medium, high) against a target variable that represents sales ($).
The OLS results show the following:
| | coef | 95% confidence interval |
|---|---|---|
| intercept | 300 | [295.92, 305.79] |
| low | -209 | [-216.53, -203.2] |
| medium | -105 | [-112.13, -98.86] |
Then I have the results of a Tukey's HSD test:
| group 1 | group 2 | mean diff | 95% confidence interval |
|---|---|---|---|
| high | low | -209 | [-217.84, -201.89] |
| high | medium | -105 | [-113.43, -97.56] |
| low | medium | 104 | [96.83, 111.92] |
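For context, here is a minimal sketch of how I set this up, assuming Python/statsmodels (the data below are simulated with group means chosen to roughly match my tables, so the numbers won't be exactly the same):

```python
# Simulated reproduction of the setup: one categorical predictor
# (advertising budget) against sales, then OLS and Tukey's HSD.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
n = 100  # observations per group (assumption; my real n differs)
df = pd.DataFrame({
    "budget": np.repeat(["high", "low", "medium"], n),
    "sales": np.concatenate([
        rng.normal(300, 15, n),  # high
        rng.normal(91, 15, n),   # low    (300 - 209)
        rng.normal(195, 15, n),  # medium (300 - 105)
    ]),
})

# OLS with 'high' as the reference level: each coefficient is that
# group's mean minus the 'high' mean, with an ordinary 95% CI.
ols = smf.ols("sales ~ C(budget)", data=df).fit()
print(ols.summary())

# Tukey's HSD: the same pairwise mean differences, but with intervals
# widened to control the family-wise error rate across all 3 comparisons.
tukey = pairwise_tukeyhsd(df["sales"], df["budget"], alpha=0.05)
print(tukey.summary())
```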
The coefficients from the OLS model are identical to the mean differences from Tukey's test. Both report 95% confidence intervals, yet the intervals are different.
- Is there a theoretical difference between the coefficients in an OLS model and the mean differences in Tukey's HSD test?
- What is the difference between these two results?
- How do I interpret that difference?
- Is one way better than the other?
Thank you!