Statistical test for difference-in-differences obtained from initial t-tests


I am performing a two-step analysis to ultimately compare between-group response differences across two time periods. Below is an illustration of the process I want to carry out.

[Figure: illustration of the difference-in-differences process]

In the first step, I have found that Group A (red) and B (blue) have significantly different responses, both in 1990 and 2016.

For the second step, I want to compare the mean differences in group responses between 1990 and 2016. However, I do not have distributional data on these differences; I have only the mean, SD, and SEM generated by the previous tests.

What kind of statistical test would allow me to determine whether the difference between years is significant? Ideally, I would use a repeated-measures test (or its non-parametric alternative, the Wilcoxon test), but I currently cannot find a way to get a distribution of differences to pair/rank.

Perhaps there is a post-hoc test that will do this?

There is 1 answer below.

For 1990 I take it that you have the sample mean of differences $\bar X$ and the standard deviation of differences $S_x$. Similarly, for 2016 you have $\bar Y$ and $S_y.$ I suppose that the SEM in each case is the sample SD divided by the square root of the sample size, so you can find (or very nearly approximate) the sample sizes if you don't already know them.
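As a minimal sketch (all numbers below are hypothetical placeholders, not from the question), the sample size can be backed out of the reported SD and SEM like this:

```python
# Since SEM = SD / sqrt(n), it follows that n = (SD / SEM)^2.
# The SD and SEM values below are hypothetical placeholders.
sd = 4.2    # reported sample SD (hypothetical)
sem = 0.6   # reported SEM (hypothetical)

n = round((sd / sem) ** 2)
print(n)  # 49
```

Rounding guards against the reported SD and SEM having been truncated to a few decimal places.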

Then you can do a Welch (separate-variances) t test to compare $\bar X$ and $\bar Y:$

$$T = \frac{\bar X - \bar Y}{\sqrt{\frac{S_x^2}{n_x}+\frac{S_y^2}{n_y}}}.$$ The terms in the denominator are the squares of the SEMs.
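For instance, a quick sketch of computing $T$ directly from the two SEMs (means and SEMs here are hypothetical placeholders):

```python
import math

# Hypothetical mean and SEM for each year's differences.
xbar, sem_x = 12.5, 0.600   # 1990 (hypothetical)
ybar, sem_y = 10.1, 0.625   # 2016 (hypothetical)

# Denominator is the square root of the sum of squared SEMs.
T = (xbar - ybar) / math.sqrt(sem_x**2 + sem_y**2)
print(round(T, 2))  # 2.77
```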

If both sample sizes are 30 or larger you can reject at the 5% level for $|T| > 2.$ For smaller sample sizes or levels other than 5%, you will need to use the Welch formula for the degrees of freedom of the approximate t distribution, and use that to get the critical value. (See a basic statistics book or google.) For that you will need the sample sizes and the sample variances.
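As a fuller sketch under hypothetical summary statistics, `scipy.stats.ttest_ind_from_stats` runs this Welch test (including the Welch degrees-of-freedom approximation) from the means, SDs, and sample sizes alone when you pass `equal_var=False`:

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical summary statistics for the differences in each year:
# 1990: mean 12.5, SD 4.2, n = 49; 2016: mean 10.1, SD 5.0, n = 64.
t_stat, p_value = ttest_ind_from_stats(
    mean1=12.5, std1=4.2, nobs1=49,
    mean2=10.1, std2=5.0, nobs2=64,
    equal_var=False,  # separate variances -> Welch's t-test
)
print(f"T = {t_stat:.3f}, p = {p_value:.4f}")
```

The two-sided p-value already reflects the Welch degrees of freedom, so there is no need to look up a critical value by hand.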