Hi. I'm confused by 'significance level'. If the significance level is lower, is the test result more reliable? I think it should be the opposite, which is what confuses me.
Look at the picture and assume that $H_0$ is the null hypothesis and $\alpha$ is the 'significance level'. Let's test this with a sample size of 40, and suppose the resulting sample mean is $\mu_1$. If the given 'confidence level' ($1-\alpha$) is 95%, the z value of 'c' would be 1.96. If it is 99%, 'c' would be 2.576.
Here is the problem. If the confidence level is 99%, is the result more reliable than at 95%? At 99%, more error would be included in the null-hypothesis area than at 95%; a 99% level means that 99% of the deviation would be regarded as falling in the null area anyway. So a 99% confidence level actually means the test is less precise than 95%, right? Conversely, if the confidence level were very small, like 5%, then 'c' would move closer to $\mu_0$, and we would regard more values as error, so the test should be more precise.
Am I wrong? The textbook seems to say I'm wrong. I hope someone can explain it. Thank you!

We should keep in mind that a 'test' is focused on whether we can 'reject' the null hypothesis, not whether we can 'accept' it. So the confidence level is about the reliability of a rejection, not of an acceptance.
So if we raise the confidence level from 95% to 99%, the rejection region becomes smaller. But if the test statistic still falls inside that smaller region, we can reject the null hypothesis with more confidence. Such a rejection is more reliable than one at the 95% level, because the 95% level has a wider rejection region and therefore a higher chance of rejecting a null hypothesis that is actually true (a Type I error, which occurs with probability $\alpha$).
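A small sketch may make this concrete. The snippet below computes the two-sided critical values for 95% and 99% confidence using Python's standard library, then checks a hypothetical test statistic ($z = 2.2$ is my own made-up value, not from the question) against each rejection region:

```python
from statistics import NormalDist

def critical_z(confidence):
    """Two-sided critical z value for a given confidence level.

    For confidence 1 - alpha, the rejection region is |z| > inv_cdf(1 - alpha/2).
    """
    alpha = 1 - confidence
    return NormalDist().inv_cdf(1 - alpha / 2)

c95 = critical_z(0.95)  # about 1.96
c99 = critical_z(0.99)  # about 2.576

# Hypothetical test statistic computed from the sample of 40:
z = 2.2

# Wider rejection region at 95% -> easier to reject; narrower at 99%.
print(f"Reject H0 at 95%? {abs(z) > c95}")  # True:  2.2 > 1.96
print(f"Reject H0 at 99%? {abs(z) > c99}")  # False: 2.2 < 2.576
```

Note how the same statistic is rejected at 95% but not at 99%: raising the confidence level shrinks the rejection region, so a rejection that survives the 99% threshold carries a smaller Type I error risk.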
I think that makes it clear now.