Understanding the p-value and its relation to the value $\alpha$ used in tests


I am having a little trouble understanding these concepts, so let's work through a concrete problem. I buy some machinery. Newer models have exponentially distributed lifetimes with mean 5 years, while older models have lifetimes of the same form but with mean 3 years.
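In standard notation, the setup above can be written as follows (taking the newer-machine model as the null hypothesis, which matches the next paragraph):

```latex
X_1,\dots,X_n \overset{\text{iid}}{\sim} \operatorname{Exp}(\text{mean } \theta),
\qquad H_0:\ \theta = 5
\quad\text{vs.}\quad
H_1:\ \theta = 3.
```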

Suppose the null hypothesis is that the machines are of the newer type. It can be seen (rigorously, and also intuitively) that given $n$ machines with breakdown times $X_i$, the natural critical region has the form $\{ \sum_{i=1}^{n} X_i < d\}$: small total lifetimes favor the older-machine alternative. I then fix a level $\alpha$, that is, I choose $d$ so that $P(\sum_{i=1}^{n} X_i < d) \le \alpha$ under the null. It can be shown that $\frac{2}{5}\sum_{i=1}^{n} X_i$ has a $\chi^2$ distribution with $2n$ degrees of freedom, so $d$ can be computed from its quantiles. Here is my issue: for fixed $n$ and a fixed observed sum, if the value of $\alpha$ is small, then the corresponding quantile (the value of $d$) is also small, so the observed sum is less likely to fall in the critical region, and the null hypothesis is more likely to be accepted.
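The behavior described above can be seen numerically. Below is a minimal stdlib-only Monte Carlo sketch (the values $n = 10$ and an observed sum of 28 years are my own assumptions for illustration, not from the problem): it estimates the null distribution of $T = \sum X_i$ by simulation, computes the critical value $d$ for several levels $\alpha$, and also computes the p-value, i.e. $P(T < t_{\text{obs}})$ under the null.

```python
import random
from bisect import bisect_left

random.seed(0)

n = 10          # number of machines (assumed for illustration)
mean_h0 = 5.0   # mean lifetime under the null (newer machines)

# Monte Carlo approximation of the null distribution of T = sum of lifetimes:
# each replicate draws n iid Exponential(mean 5) lifetimes and sums them.
sims = sorted(
    sum(random.expovariate(1 / mean_h0) for _ in range(n))
    for _ in range(100_000)
)

def critical_value(alpha):
    # alpha-quantile of T under H0, so that P(T < d) is approximately alpha
    return sims[int(alpha * len(sims))]

t_obs = 28.0    # hypothetical observed sum of lifetimes

# p-value: fraction of simulated null sums below the observed sum
p_value = bisect_left(sims, t_obs) / len(sims)

for alpha in (0.10, 0.05, 0.01):
    d = critical_value(alpha)
    print(f"alpha={alpha:.2f}: d={d:5.2f}  reject H0? {t_obs < d}")
print(f"p-value ~ {p_value:.3f}  (H0 is rejected exactly when p-value < alpha)")
```

Running this shows exactly the phenomenon in question: the same observation is rejected at $\alpha = 0.10$ but not at $\alpha = 0.05$ or $\alpha = 0.01$, because shrinking $\alpha$ shrinks $d$ and hence the critical region. The p-value is the smallest $\alpha$ at which the observation would still be rejected.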

What I don’t get is why this makes sense. The value of $\alpha$ is supposed to represent the risk that, if the hypothesis is true, the statistic falls in the critical region. How can it be that I reject the hypothesis at one value of $\alpha$ and accept it at a smaller one? Shouldn’t smaller values of $\alpha$ indicate more precision, and thus be harder to attain?