There is a convention in much of the social "sciences" to use a 5% significance level to decide whether two samples differ significantly. That corresponds to roughly two standard deviations. It seems like A LOT to me (someone who once played some Yahtzee), so I might have misunderstood the interpretation.
Let's say that a small web shop has tried out two different web page designs. Both designs produced samples of the same size and with the same standard deviation on the same revenue variable; only the sample means differ.
Does the difference between their means REALLY have to be an entire two standard deviations apart (in the t-distribution) for there to be a significant difference at the 5% level? And is there really still a 5% chance that the design whose mean is two standard deviations worse will come out ahead, purely mathematically, as if this were about dice throws?
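To make the setup concrete, here is a quick simulation sketch (Python with NumPy/SciPy; the sample size, standard deviation, and mean revenues are numbers I made up purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical numbers, just to make the question concrete:
n = 30       # visitors shown each design
s = 10.0     # common standard deviation of per-visitor revenue
delta = 2.0  # true gap between the two designs' mean revenues

a = rng.normal(100.0, s, n)          # revenue under design A
b = rng.normal(100.0 + delta, s, n)  # revenue under design B

t, p = stats.ttest_ind(a, b)         # pooled two-sample t-test, two-sided
print(f"t = {t:.2f}, p = {p:.3f}")

# Gap between the sample means needed to reach p < 0.05 in this setup:
se = s * np.sqrt(2.0 / n)                  # standard error of the mean difference
t_crit = stats.t.ppf(0.975, df=2 * n - 2)  # two-sided 5% critical value
print(f"required gap: about {t_crit * se:.2f} (vs. s = {s})")
```

At least in this toy version, the cutoff seems to live on the standard-error scale (which shrinks as n grows) rather than on the raw standard deviation of the revenue itself, which may be exactly where my Yahtzee intuition goes wrong.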