t test and ANOVA comparison


I'm trying to understand the difference between the t test and ANOVA, and wanted to ask:

1) What would be a few concrete experimental situations in which ANOVA could be used to compare the effects of different treatments upon the population being studied?

I'm curious whether anyone has a few concrete examples of such situations.

2) What are some of the assumptions and issues involved with both t tests and ANOVA? Which assumptions are the procedures robust to, and what kind of research hypothesis $H_a$ can be handled by t-testing but not by ANOVA?

I am assuming that you are familiar with standard ANOVA and t tests. The purpose of this Answer is to illustrate them and point out some respects in which they differ.

Consider 15 subjects, 5 of them randomly assigned to each of three groups. Each group is given a different treatment for lowering blood pressure. Decreases in BP upon treatment are as follows.

A:   0, -1, 10,  3,  5
B:  10,  2, 15,  4, 10
C:  21, 11, 10, 12,  8 

A standard analysis of variance (ANOVA) tests the null hypothesis that all three group population means are equal against the alternative that at least one pair of means differs. Results from Minitab software are shown below. (This method assumes that the three populations are normal and have equal variances.)

Analysis of Variance

Source  DF  Adj SS  Adj MS  F-Value  P-Value
Factor   2   202.8  101.40     4.24    0.041
Error   12   287.2   23.93
Total   14   490.0

The P-value 0.041 < 0.05 indicates that the null hypothesis is rejected at the 5% level of significance.
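If you want to reproduce that table yourself, here is a minimal sketch of the same one-way ANOVA in Python with SciPy (the variable names are mine; like the Minitab procedure, `f_oneway` assumes normal populations with equal variances):

```python
from scipy.stats import f_oneway

# Blood-pressure decreases for the three treatment groups (data from above)
A = [0, -1, 10, 3, 5]
B = [10, 2, 15, 4, 10]
C = [21, 11, 10, 12, 8]

# One-way ANOVA: H0 is that all three population means are equal
F, p = f_oneway(A, B, C)
print(f"F = {F:.2f}, P-value = {p:.3f}")  # F = 4.24, P-value = 0.041, matching Minitab
```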

One might try doing three individual t tests to check whether $\mu_A$ and $\mu_B$ are significantly different, and similarly for the population means of A and C, and of B and C. But if all three tests are done at the 5% level of significance, the Type I errors of the three tests can accumulate, giving an overall result of unknown reliability. (A 'Bonferroni' procedure would do each t test at level $0.05/3$ in order to keep the overall level at or below 5%.)
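A sketch of that Bonferroni scheme in Python, using SciPy's pooled two-sample t test (the loop structure is mine; each raw p-value is compared with $0.05/3 \approx 0.0167$ rather than $0.05$):

```python
from itertools import combinations
from scipy.stats import ttest_ind

groups = {
    "A": [0, -1, 10, 3, 5],
    "B": [10, 2, 15, 4, 10],
    "C": [21, 11, 10, 12, 8],
}

alpha = 0.05 / 3  # Bonferroni-adjusted level for each of the 3 pairwise tests
for (name1, x), (name2, y) in combinations(groups.items(), 2):
    t, p = ttest_ind(x, y)  # pooled t test (equal variances assumed)
    verdict = "significant" if p < alpha else "not significant"
    print(f"{name1} vs {name2}: t = {t:.2f}, p = {p:.3f} -> {verdict} at 5% family level")
```

Note that the A-vs-C raw p-value lands very close to the adjusted cutoff, which illustrates how conservative the Bonferroni correction can be.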

The Tukey 'HSD' method of multiple comparisons does this 'family' of three comparisons with an overall 'family rate' of 5%. It finds a significant difference between $\mu_A$ and $\mu_C$, but not a significant difference when A and B are compared or when B and C are compared. Tukey confidence intervals are plotted in the figure below:

[Figure: Tukey simultaneous confidence intervals for the three pairwise differences of means]
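The same family of Tukey HSD comparisons can be sketched with SciPy's `tukey_hsd` (available in SciPy 1.8 and later; the group order A, B, C is an assumption of this sketch):

```python
from scipy.stats import tukey_hsd

A = [0, -1, 10, 3, 5]
B = [10, 2, 15, 4, 10]
C = [21, 11, 10, 12, 8]

# Tukey HSD: all pairwise comparisons with a 5% family-wise error rate
res = tukey_hsd(A, B, C)
print(res)  # summary of pairwise differences and adjusted p-values

# res.pvalue is a 3x3 matrix of adjusted p-values; groups are indexed 0=A, 1=B, 2=C
print(f"A vs B: p = {res.pvalue[0, 1]:.3f}")
print(f"A vs C: p = {res.pvalue[0, 2]:.3f}")  # the only comparison below 0.05
print(f"B vs C: p = {res.pvalue[1, 2]:.3f}")
```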

By contrast, if we actually had only two groups, A and C, then a t test would be the appropriate way to compare them. The 'Welch separate-variances' t test has P-value 0.02. (Unlike the 'pooled' t test, the Welch test does not assume that populations A and C have equal variances. Both tests assume that the populations are normal.)

Two-sample T for A vs C

   N   Mean  StDev  SE Mean
A  5   3.40   4.39      2.0
C  5  12.40   5.03      2.2

Difference = μ (A) - μ (C)
Estimate for difference:  -9.00
95% CI for difference:  (-16.06, -1.94)
T-Test of difference = 0 (vs ≠): 
    T-Value = -3.01  P-Value = 0.020  DF = 7
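That Welch test can be reproduced in Python with SciPy's `ttest_ind` using `equal_var=False` (a sketch; note that SciPy keeps the fractional Welch degrees of freedom, about 7.9, rather than the integer DF = 7 shown above, so its p-value comes out slightly smaller than 0.020):

```python
from scipy.stats import ttest_ind

A = [0, -1, 10, 3, 5]
C = [21, 11, 10, 12, 8]

# Welch separate-variances t test: does not assume equal population variances
t, p = ttest_ind(A, C, equal_var=False)
print(f"T-Value = {t:.2f}, P-Value = {p:.3f}")  # T-Value matches the -3.01 above
```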

Notes: (1) If you are comparing data from more than two normally distributed groups, use ANOVA; to compare two groups, use a t test. (2) It is correct, but needlessly complicated, to use an ANOVA procedure for a two-sided test on just two groups. (3) Most statistical software will do a version of ANOVA that does not assume equal population variances, but I did not illustrate that kind of ANOVA here. (4) You can study the assumptions, theory, and computational formulas for t tests and ANOVA procedures in most elementary statistics texts and online (look at Wikipedia or NISS sites; I'd avoid YouTube, Khan, and some other Internet sources in this particular case).