My book says that if your sample size $n$ is fixed, $\alpha$ is your significance level (the Type I error probability), and $\beta$ is the probability of making a Type II error, then a smaller $\alpha$ leads to a larger $\beta$ and a smaller $\beta$ leads to a larger $\alpha$, so it is impossible to minimize both Type I and Type II errors at once.
My question is: why? My book gives no explanation, so I would appreciate it if someone could help me understand it. This is for a basic stats class, if that helps.
Consider an example with simple hypotheses $H_0:\theta=\theta_0$ and $H_1:\theta=\theta_1$ (with $\theta_1<\theta_0$), where the test rejects if $x<x_0$ (shown as the vertical line in the picture). Then the probability of a Type I error is $\alpha=P(\text{reject } H_0 \mid H_0)$, the black area, and the probability of a Type II error is $\beta=P(\text{fail to reject } H_0 \mid H_1)$, the grey area. If you move the rejection region to decrease the probability of a Type I error by making $x_0$ smaller, you can see that you will increase the probability of a Type II error, and vice versa. These two goals therefore work against each other. One common approach is to fix the level $\alpha$ and then find the test that minimizes $\beta$ at that level.
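You can see the tradeoff numerically with a concrete (hypothetical) instance of this setup: say $X \sim N(\theta, 1)$ with $\theta_0 = 0$ and $\theta_1 = -2$, so the test rejects for small $x$. Then $\alpha = \Phi(x_0 - \theta_0)$ and $\beta = 1 - \Phi(x_0 - \theta_1)$, where $\Phi$ is the standard normal CDF. Sliding the cutoff $x_0$ down shrinks $\alpha$ but grows $\beta$:

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function (no external libraries needed)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Hypothetical simple-vs-simple test: X ~ N(theta, 1), reject H0 when x < x0.
theta0, theta1 = 0.0, -2.0

for x0 in [-2.0, -1.5, -1.0, -0.5]:
    alpha = norm_cdf(x0, mu=theta0)        # P(X < x0 | H0): Type I error
    beta = 1.0 - norm_cdf(x0, mu=theta1)   # P(X >= x0 | H1): Type II error
    print(f"x0 = {x0:5.2f}   alpha = {alpha:.3f}   beta = {beta:.3f}")
```

Running this shows $\alpha$ shrinking and $\beta$ growing as $x_0$ moves left, which is exactly the black and grey areas trading places in the picture.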