Hypothesis Testing: Avg Basal Temperature Raised Question


After googling this question, I couldn't find any advice that was actually free, so I wanted to ask here. It's a review question and I'm trying to prepare for my finals next week. I'm just wondering if my steps for this problem were correct.

The question is: A medical scientist believes that the average basal temperature of (outwardly) healthy individuals has increased over time and is now greater than 98.6 degrees Fahrenheit. To prove this, she has randomly selected 100 healthy individuals. If their mean temperature is 98.74 with a sample standard deviation of 1.1 degrees, does this prove her claim at the 5 percent level? What about the 1 percent level?

I've only attempted the 5% portion since I'm not sure if I'm correct. Nonetheless, here I go:

The population mean is: 98.6

n = 100 people in the survey

average temperature of those people: 98.74

std dev: 1.1

The level of significance(?) is .05. Therefore, the value from the table (z-dist table) would be 1.96.

Based on the values, I used this formula:

z = |x-bar - pop. mean| / (stdDev / sqrt(n) )

After plugging in the values, I got:

z = (98.74 - 98.6) / (1.1 / sqrt(100)) ≈ 1.27, so the p-value is P(Z > 1.27).

Now I do a check: 1.27 does not exceed 1.96, so we fail to reject the null hypothesis; that is, we can't conclude that the average basal temperature has risen.

Is this the correct line of reasoning? If not, then could someone please tell me where I went wrong? Explanations are always nice.

Best answer:

First, state the hypothesis: $$H_0 : \mu = \mu_0 \quad \text{vs.} \quad H_a : \mu > \mu_0,$$ where $\mu_0 = 98.6$ is the hypothesized mean of the population. This is a one-sided test because we are only interested in rejecting the null hypothesis if the data furnishes sufficient evidence to conclude that the mean temperature has increased, not decreased.

Next, state the distribution of the test statistic under the null hypothesis. The test statistic is the studentized sample mean: $$T = \frac{\bar x - \mu_0}{s/\sqrt{n}} \sim t_{n-1},$$ where $\bar x$ is the sample mean, $s$ is the sample standard deviation, $n$ is the sample size, and $t_{n-1}$ is Student's $t$ distribution with $n-1$ degrees of freedom. The $t$ distribution is used because the population standard deviation is not known and must itself be estimated from the sample. Since $n = 100$ is quite large, there is very little practical difference between this test statistic and the standardized sample mean $$Z = \frac{\bar x - \mu_0}{\sigma/\sqrt{n}} \sim \operatorname{Normal}(0,1).$$

Now, if we calculate the studentized statistic, against what critical value should we compare it? That is to say, under what circumstances would we reject $H_0$ at the $\alpha = 0.05$ significance level? Since the test is one-sided, we reject when the test statistic exceeds the $95^{\rm th}$ percentile of the $t_{99}$ distribution; i.e., we are willing to accept a $5\%$ chance that the test statistic exceeds $t_{99,0.05} \approx 1.66039$, thereby incorrectly rejecting the null hypothesis when in fact the true mean is $98.6$. (Note that $1.96$ is the *two-sided* critical value; for a one-sided test at $\alpha = 0.05$, the cutoff is about $1.66$, or $1.645$ under the normal approximation.) Plugging in the data gives $$T = \frac{98.74 - 98.6}{1.1/\sqrt{100}} \approx 1.27 < 1.66039,$$ so the observed statistic does not exceed the critical value.
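As a quick numeric check, here is a sketch in Python using only the standard library. Since the stdlib has no $t$ distribution, the one-sided critical value $t_{99,0.05} \approx 1.66039$ quoted above is hard-coded from a table:

```python
import math

# Sample summary from the problem statement.
n = 100        # sample size
xbar = 98.74   # sample mean (degrees Fahrenheit)
s = 1.1        # sample standard deviation
mu0 = 98.6     # hypothesized population mean under H0

# Studentized sample mean: T = (xbar - mu0) / (s / sqrt(n)).
se = s / math.sqrt(n)        # standard error of the mean, 0.11
t_stat = (xbar - mu0) / se   # ≈ 1.27

# One-sided critical value t_{99, 0.05}, taken from a t table.
t_crit = 1.66039

print(round(t_stat, 4))   # 1.2727
print(t_stat > t_crit)    # False -> fail to reject H0 at the 5% level
```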

Thus, at the $5\%$ level, the sample provides insufficient evidence to suggest that the true mean exceeds $98.6$. This should also be clear from a cursory glance at the data: the difference $\bar x - \mu_0 = 0.14$ degrees is only slightly larger than the standard error of the mean $s/\sqrt{n} = 0.11$, whereas at a one-sided $5\%$ level you'd need a difference of about $1.66$ standard errors.

Finally, if you fail to reject at $\alpha = 0.05$, you would also certainly fail to reject at $\alpha = 0.01$, which is a more stringent criterion for Type I error.
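The same conclusion can be read off an approximate p-value. Since $n = 100$ is large, the normal approximation mentioned above applies, so the stdlib's `statistics.NormalDist` suffices (a sketch, not part of the original answer):

```python
import math
from statistics import NormalDist

n, xbar, s, mu0 = 100, 98.74, 1.1, 98.6
z = (xbar - mu0) / (s / math.sqrt(n))   # ≈ 1.27

# One-sided p-value P(Z > z) under the standard normal approximation;
# comes out to roughly 0.10.
p_value = 1 - NormalDist().cdf(z)

print(p_value > 0.05)   # True -> fail to reject at the 5% level
print(p_value > 0.01)   # True -> a fortiori, fail to reject at the 1% level
```

Because the p-value (about $0.10$) exceeds both $0.05$ and $0.01$, the data fail to reject $H_0$ at either level, matching the reasoning above.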