Suppose we are doing a two-tailed test of $H_0: \mu = \mu_0$ on the mean of a Gaussian distribution with known unit variance (a $z$-test). Let the actual underlying $\mu$ be very close to $\mu_0$, but not equal to it.
Then every acceptance of $H_0$ is a Type II error, since $H_0$ is in fact false. Holding everything else fixed, as $\mu$ moves closer and closer to $\mu_0$, the probability of a Type II error grows. For a level-$\alpha$ two-tailed $z$-test it is
$$\beta(\mu) = \Phi\!\left(z_{\alpha/2} - \frac{\sqrt{n}\,(\mu - \mu_0)}{\sigma}\right) - \Phi\!\left(-z_{\alpha/2} - \frac{\sqrt{n}\,(\mu - \mu_0)}{\sigma}\right),$$
which tends to $1 - \alpha$ (e.g., $0.95$ at the usual $\alpha = 0.05$) as $\mu \to \mu_0$: accepting $H_0$ becomes almost a certainty.
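A minimal simulation sketch illustrates this (the concrete values $\mu_0 = 0$, $\sigma = 1$, $n = 100$, and $\alpha = 0.05$ are my own illustrative assumptions, not from any particular source):

```python
import numpy as np
from scipy import stats

# Two-tailed z-test of H0: mu = mu0 at level alpha, known sigma.
rng = np.random.default_rng(seed=0)
alpha, n, mu0, sigma = 0.05, 100, 0.0, 1.0
z_crit = stats.norm.ppf(1 - alpha / 2)  # two-tailed critical value, ~1.96

for mu in [0.5, 0.1, 0.01, 0.001]:  # true mean drifting toward mu0
    samples = rng.normal(mu, sigma, size=(50_000, n))
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    type2_rate = np.mean(np.abs(z) <= z_crit)  # fraction of acceptances of H0
    print(f"mu = {mu:6.3f}: simulated Type II rate = {type2_rate:.3f}")
# The simulated rate climbs toward 1 - alpha = 0.95 as mu approaches mu0.
```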
But intuitively, if the test concludes that $H_0$ is true when $\mu$ is extremely close to $\mu_0$, we would not want to call this conclusion an 'error' at all. In fact, it is impossible to formulate a hypothesis such that $\mu_0$ equals the actual value to infinite precision.
Yet the definition of a Type II error says that every acceptance of $H_0$ is a Type II error as soon as $\mu$ is not exactly equal to $\mu_0$, no matter how close the two are.
Doesn't this mean that almost every acceptance of $H_0$ we make in the empirical world is a Type II error?
Am I misinterpreting something? Why is the Type II error rate considered so important that it is taught in almost all mainstream textbooks?