What are hypothesis tests for in statistics?


A manufacturer produces a component for car engines. It is known that the diameter of this component is approximately normally distributed with a standard deviation of 0.0005 millimetres. In a random sample of 15 components the average diameter was 74.036 mm.

a) For the hypotheses $H_0 : \mu = 74.035$ and $H_1 : \mu \neq 74.035$, determine the acceptance region and the decision to take at a significance level of 1%.

[Images: worked solution showing the test statistic $z = 7.74597$ falling outside the 1% acceptance region]

(in red: "Reject H_0 with a significance of 1%")
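The calculation from the images can be sketched as a standard two-sided z-test (all values are taken from the problem statement; only the variable names are mine):

```python
import math
from statistics import NormalDist

sigma = 0.0005   # known population standard deviation, mm
n = 15           # sample size
x_bar = 74.036   # sample mean, mm
mu_0 = 74.035    # hypothesised mean under H0, mm
alpha = 0.01     # significance level

# Test statistic for a two-sided z-test with known sigma
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# Critical value: H0 is rejected when |z| exceeds it
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

print(f"z = {z:.5f}, critical value = {z_crit:.5f}")
print("Reject H0" if abs(z) > z_crit else "Do not reject H0")
# z = 7.74597, critical value = 2.57583 -> Reject H0
```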

What is this trying to achieve? Are we trying to "guess" the average of the population? Why? Do we reject it because $7.74597$ isn't within the bounds in the picture? Is the guess more accurate the smaller the significance? If so what is the difference between this and confidence intervals?


There is 1 answer below.

What is this trying to achieve? - A hypothesis test is designed to test whether we should believe a given hypothesis or not. In this case, the result tells us that we should not believe it.

Are we trying to "guess" the average of the population? Why? - No, we are not trying to guess anything. We are testing whether it can be assumed that the population mean is exactly $74.035$ mm. The why depends on the particular problem. Maybe the manufacturer needs the components to be as close to $74.035$ mm as possible in order to minimize the risk of engine failures?

Do we reject it because $7.74597$ isn't within the bounds in the picture? - Yes: the test statistic lies outside the acceptance region, and rejecting $H_0$ in that case is exactly how the test is defined.

Is the guess more accurate the smaller the significance? - As said before, there is no guess, so we cannot speak of its accuracy. A smaller significance level means that the acceptance region gets wider, so that we require stronger evidence to reject the null hypothesis.
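You can see the acceptance region widening as the significance level shrinks by printing the two-sided critical value for a few common choices of $\alpha$ (a minimal sketch using the standard normal quantile function):

```python
from statistics import NormalDist

# Smaller alpha -> larger critical value -> wider acceptance region
for alpha in (0.10, 0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"alpha = {alpha:4.2f}: accept H0 when |z| <= {z_crit:.4f}")
# alpha = 0.10 gives 1.6449, alpha = 0.05 gives 1.9600, alpha = 0.01 gives 2.5758
```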

What is the difference between this and confidence intervals? - The hypothesis test answers a single question: should we believe $H_0$ or not? A confidence interval instead gives a range of values that contains the true parameter with some level of confidence. There is, however, a relation between the two. For the z-test used here (and analogously for the t-test), we should accept $H_0:\mu = \mu_0$ (against the alternative $\mu \neq \mu_0$) if and only if $\mu_0$ is contained in the $(1-\alpha)\cdot 100\%$ confidence interval for $\mu$.
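That duality is easy to check numerically. Below is a sketch with the numbers from the question; the two helper functions and their names are mine, not part of any library:

```python
import math
from statistics import NormalDist

def reject_via_test(x_bar, mu_0, sigma, n, alpha):
    """Two-sided z-test: reject H0 when |z| exceeds the critical value."""
    z = (x_bar - mu_0) / (sigma / math.sqrt(n))
    return abs(z) > NormalDist().inv_cdf(1 - alpha / 2)

def reject_via_ci(x_bar, mu_0, sigma, n, alpha):
    """Equivalent view: reject H0 when mu_0 falls outside the CI."""
    half = NormalDist().inv_cdf(1 - alpha / 2) * sigma / math.sqrt(n)
    return not (x_bar - half <= mu_0 <= x_bar + half)

# Both views agree on the data from the question
print(reject_via_test(74.036, 74.035, 0.0005, 15, 0.01))  # True
print(reject_via_ci(74.036, 74.035, 0.0005, 15, 0.01))    # True
```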