This question has 2 parts:
(1) What is the fundamental difference between classical and Bayesian hypothesis testing, and how should I interpret that difference?
(2) Here is a paragraph quoted from Casella and Berger Statistical Inference (Section 8.2):
I don't understand:
(i) Why is $P(H_0 \text{ is true} \mid X)$ either 0 or 1? If I toss a coin, I know I'll get either heads or tails, but I don't say the probability of heads is 0 or 1 while the outcome is unknown.
(ii) Why do these probabilities not depend on the data $X$?

With the coin toss, the quantity of interest is the probability $p$ that the outcome is heads. In the classical paradigm, $p$ is a fixed, unknown constant, not a random variable. For a fair coin, $p = 0.5$ regardless of whether a particular toss lands heads or tails (i.e. regardless of the data $X$), so $H_0: p = 0.5$ is simply true and $P(H_0: p = 0.5 \mid X) = 1$. If the coin were biased, $H_0$ would simply be false and that probability would be $0$. Either way, the truth of $H_0$ is settled before any data are observed, which is why the probability is 0 or 1 and does not depend on $X$. In the Bayesian paradigm, by contrast, $p$ itself is treated as random, so $P(H_0 \mid X)$ can take any value in $[0, 1]$ and is updated by the data.
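To make the contrast concrete, here is a minimal sketch (the data, 6 heads in 10 tosses, and the flat Beta(1, 1) prior are my own illustrative choices, not from Casella and Berger). It computes the Bayesian posterior over $p$ by numerical integration, something the classical paradigm has no analogue for, since there $p$ is not random:

```python
import math

# Hypothetical coin-toss data: 6 heads in 10 tosses
heads, tails = 6, 4

# Classical view: p is a fixed unknown constant, so "P(H0: p = 0.5 | X)"
# is either 0 or 1 depending on whether H0 happens to be true; there is
# nothing to compute from the data.

# Bayesian view: give p a prior, here the flat Beta(1, 1); the data then
# update it to a Beta(heads + 1, tails + 1) posterior.
a, b = heads + 1, tails + 1

def beta_kernel(p, a, b):
    """Unnormalized Beta(a, b) density; normalized numerically below."""
    return p ** (a - 1) * (1 - p) ** (b - 1)

# Trapezoidal integration on a fine grid over [0, 1]
n = 50_000
grid = [i / n for i in range(n + 1)]
dens = [beta_kernel(p, a, b) for p in grid]
norm = sum((dens[i] + dens[i + 1]) / 2 / n for i in range(n))

# Posterior mean of p -- a random quantity in the Bayesian paradigm
post_mean = sum(
    (grid[i] * dens[i] + grid[i + 1] * dens[i + 1]) / 2 / n
    for i in range(n)
) / norm

# Posterior probability that p lies near 0.5: a meaningful, data-dependent
# number here, whereas classically it is 0 or 1 and independent of X.
mass_near_half = sum(
    (dens[i] + dens[i + 1]) / 2 / n
    for i in range(n)
    if 0.45 <= grid[i] <= 0.55
) / norm

print(f"Posterior mean of p: {post_mean:.4f}")  # close to 7/12
print(f"P(0.45 <= p <= 0.55 | X): {mass_near_half:.4f}")
```

The point of the sketch is only the asymmetry: the Bayesian computation produces an intermediate probability that changes with the observed counts, while in the classical setup the corresponding quantity is pinned at 0 or 1 no matter what `heads` and `tails` are.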