I took an online probability course a while back and ran into a textbook example in which a bank needed to calculate the interest rate that would yield a $1\%$ probability of taking a loss on the totality of the money it loaned out due to defaults. The default rate was $2\%$ on $1000$ loans of $\$180{,}000$ each, with a loss of $\$200{,}000$ per default.
Until recently, I believed this to be an example of hypothesis testing, due to several serious points of contact, and attempted to fit $Z$-testing problems into its solution framework, but they didn't seem to harmonize. Its solution started with $$\Pr(S < 0) = 0.01,$$ which led me to believe $S < 0$ was either the null or the alternative hypothesis and $0.01$ was the significance level. Using algebra, it transformed the above into $$\Pr\left(\frac{S - E[S]}{SE[S]} < \frac{-E[S]}{SE[S]}\right) = 0.01,$$ then observed that this means $$\frac{-E[S]}{SE[S]} = \text{qnorm}(0.01),$$ and plugged in all the appropriate expressions. Just in case I left out any important details, the complete problem was...
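To make the question concrete, here is how I understand the solution's final step, as a minimal sketch. I'm assuming the $2\%$ figure is the per-loan default probability, that the gain on a non-defaulting loan is the interest on the $\$180{,}000$ principal, and I'm using Python's `statistics.NormalDist.inv_cdf` as a stand-in for R's `qnorm`:

```python
from statistics import NormalDist
from math import sqrt

# Assumed inputs from the problem: 1000 loans, a $200,000 loss per
# default, a 2% default probability, $180,000 loaned per customer.
n, loss, p, principal = 1000, -200_000, 0.02, 180_000

# z such that Pr(Z < z) = 0.01, i.e. qnorm(.01) in R.
z = NormalDist().inv_cdf(0.01)

# S is the total outcome over n loans, with per-loan gain g on
# non-defaults, so E[S] = n*((1-p)*g + p*loss) and
# SE[S] = sqrt(n*p*(1-p)) * (g - loss).
# Setting -E[S]/SE[S] = z and solving for g:
g = -loss * (n * p - z * sqrt(n * p * (1 - p))) \
    / (n * (1 - p) + z * sqrt(n * p * (1 - p)))

rate = g / principal
print(round(rate, 4))  # interest rate giving Pr(S < 0) = 0.01
```

With these assumed numbers this comes out to roughly $3.5\%$; the point for my question is only the *shape* of the procedure, not the particular figures.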
In addition to the apparent hypothesis and significance level, subtracting $E[S]$ and dividing by $SE[S]$ also seems related to the test statistic for a $Z$-test on a mean, as does consulting the Gaussian's cumulative distribution function.
Am I correct that this is not quite the same as hypothesis testing? Is it perhaps an inverse operation to $Z$-testing? Does something like the above method provide a general way to derive the test statistic for at least some broader class of hypothesis-testing problems (granted that we may still need a general way to relate the standard error to the standard deviation to eliminate the need for memorization entirely)?
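By "inverse operation" I mean something like the following sketch: a one-sided $Z$-test maps a test statistic to a tail probability through the normal CDF, while the loan problem runs the same map in reverse, from a target tail probability back to a quantile (again using Python's `statistics.NormalDist` in place of R's `pnorm`/`qnorm`):

```python
from statistics import NormalDist

nd = NormalDist()

# qnorm direction (the loan problem): probability -> quantile.
z = nd.inv_cdf(0.01)   # about -2.326

# pnorm direction (a one-sided Z-test): quantile -> probability.
p = nd.cdf(z)          # recovers 0.01

print(z, p)
```

Is that composition of one map with the other's inverse the right way to think about the relationship?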
