In constructing a confidence interval for a sample (n = 40; the data points are 5-digit numbers ranging from 30000 to 90000), I can either use the t-table in the back of the book to find the margin of error and calculate it by hand, or plug the data set into a computer program. The results of these two processes kept differing by what seemed like a significant amount, and I finally realized it's because the computer program was using a critical value of 2.576 for a 99% confidence interval, whereas the t-table gives a t-value of 2.708 for a 0.99 confidence level with 39 degrees of freedom. Is there a reason these two values could differ, and/or what could I be doing wrong in my calculations?
Why is the t-value generated by a computer program different than the one the t-table lists?
56 Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 2026-02-26
There are 2 best solutions below.
A critical value of $2.576$ corresponds to a sample size of $\infty$. (You can find it either on a z-table or in the very last row of a t-table.) So it looks like the computer program is not using a t critical value but rather a z critical value, which is appropriate when the population standard deviation is known.
By the way, your t critical value is about right: $t_{39,\,0.995}\approx 2.708$.
Based on the comment above, I recommend first deciding whether you want a t critical value or a z critical value.
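To see the difference directly, here is a minimal sketch (assuming `scipy` is available) that computes both critical values for a 99% interval with $n = 40$:

```python
# Compare the t- and z-critical values for a 99% confidence interval
# with n = 40, i.e. 39 degrees of freedom (scipy is assumed available).
from scipy import stats

n = 40
conf = 0.99
alpha = 1 - conf

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # t_{39, 0.995}
z_crit = stats.norm.ppf(1 - alpha / 2)         # z_{0.995}

print(f"t-critical (df = 39): {t_crit:.4f}")  # ~2.7079
print(f"z-critical:           {z_crit:.4f}")  # ~2.5758
```

If your program's output matches the second number, it is computing a z-interval.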
The exact value is $$t_{39,0.995} = 2.7079131835176620992\ldots.$$ On the other hand, $$z_{0.995} = 2.5758293035489007610\ldots.$$ So whatever computer program you are using, it is not doing what you think it's doing. When you enter your data, it is performing a $z$-test, not a $t$-test.
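To illustrate how much this matters in practice, the following sketch builds both intervals on simulated data resembling the question's setup (n = 40, values between 30000 and 90000; the data themselves are made up for illustration, and `numpy`/`scipy` are assumed available):

```python
# Construct 99% t- and z-intervals on simulated data and compare
# the margins of error. The data here are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.uniform(30000, 90000, size=40)

n = len(data)
mean = data.mean()
se = data.std(ddof=1) / np.sqrt(n)  # standard error using sample s

t_crit = stats.t.ppf(0.995, df=n - 1)
z_crit = stats.norm.ppf(0.995)

print("t-interval:", (mean - t_crit * se, mean + t_crit * se))
print("z-interval:", (mean - z_crit * se, mean + z_crit * se))
print("margin ratio t/z:", t_crit / z_crit)  # ~1.05: t-interval ~5% wider
```

With 39 degrees of freedom the t-interval is only about 5% wider than the z-interval, but on data in the tens of thousands that can still shift each endpoint noticeably, which explains the "seemingly significant" discrepancy.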