In this question, I can work out that the posterior should be a $Beta(r+1, n-r+1)$ distribution.
However, what I am struggling with is how to compute $f(x_{n+1} \mid \theta)$. Is this the binomial distribution with $r+1$ replacing $r$, because it's the probability of achieving $r+1$ heads in $n+1$ flips?
Or is it simply $\theta$, because it's the probability of seeing an $(r+1)$th head in one extra flip, having observed $r$ heads in $n$ flips?
Obviously, depending on which $f$ is correct, the answers differ significantly.
Any help would be greatly appreciated! Thanks a lot

Taking a Bayesian approach, one considers the results of the first $n$ tosses to provide information about the Heads probability $\theta.$ So if you got $r = 400$ heads in $n = 1000$ previous tosses, your posterior distribution would be $\theta \sim Beta(401, 601).$ Taking the posterior mean as the success probability for toss $1001,$ you would have $E(\theta) = 401/1002 = 0.4002.$ Some would prefer to use the posterior mode $400/1000 = 0.4000$ or the posterior median $0.4001,$ but it doesn't much matter. In any case, the Bayesian approach would predict $P(Heads_{1001}) \approx 0.4.$
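The three posterior summaries can be checked directly; a quick sketch using `scipy.stats` (Python here rather than R, purely for illustration):

```python
# Posterior summaries for theta ~ Beta(401, 601) after 400 Heads in 1000 tosses.
from scipy.stats import beta

a, b = 401, 601                      # posterior parameters: r + 1, n - r + 1
post_mean = a / (a + b)              # 401/1002 ~ 0.4002
post_mode = (a - 1) / (a + b - 2)    # 400/1000 = 0.4000
post_median = beta.median(a, b)      # ~ 0.4001

print(round(post_mean, 4), round(post_mode, 4), round(post_median, 4))
```

All three summaries agree to two decimal places, which is why the choice among them "doesn't much matter" here.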
The reason that Bayesian modeling is used ever more frequently in polling is that past experience can inform future predictions. Often the posterior from the last poll helps to formulate the prior for the next.
Some would argue that you have to take a realization $\theta$ at random from $Beta(401, 601)$ and then toss a coin with that Heads probability. The following R code performs that experiment a million times, with the expected result. (In more complicated Bayesian inference models the result is not obvious, and numerical or analytical integration is necessary.)
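The R code itself is not reproduced in this excerpt; the following Python (NumPy) sketch performs the same two-stage experiment a million times:

```python
# Draw theta from the posterior Beta(401, 601), then toss a coin with
# Heads probability theta; repeat a million times.
# (Python stand-in for the R code described above.)
import numpy as np

rng = np.random.default_rng(2024)          # fixed seed for reproducibility
n_sim = 1_000_000
theta = rng.beta(401, 601, size=n_sim)     # posterior realizations
heads = rng.random(n_sim) < theta          # one toss per realization
print(heads.mean())                        # ~ 0.4002, the posterior mean
```

Marginalizing $\theta$ out by simulation reproduces the posterior mean, as expected.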
In (b), the probability of 3 Heads in 5 extra flips is 0.2306:
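The code for (b) is likewise not shown in this excerpt. Marginalizing $\theta$ over the $Beta(401, 601)$ posterior gives a beta-binomial probability, whose exact value is about 0.2304; the 0.2306 quoted above is presumably a simulation estimate. A Python sketch of both routes:

```python
# P(3 Heads in 5 further tosses), with theta integrated out over the
# Beta(401, 601) posterior: a beta-binomial(5, 401, 601) probability.
import numpy as np
from scipy.stats import betabinom

exact = betabinom.pmf(3, 5, 401, 601)          # ~ 0.2304

rng = np.random.default_rng(2024)
theta = rng.beta(401, 601, size=1_000_000)     # posterior draws
sim = (rng.binomial(5, theta) == 3).mean()     # ~ 0.230, up to simulation error

print(round(exact, 4), round(sim, 4))
```

The simulated value agrees with the exact one to within Monte Carlo error of order $10^{-3}$.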
There is an old joke question about how to distinguish among a probabilist, a statistician, and a fool: I have just tossed a coin 20 times and gotten 20 Heads in a row. What are the chances of a Head on the next toss? The probabilist, persistently faithful to a belief in fair coins, says 1/2. The statistician (especially a Bayesian one) says he/she is betting it'll be Heads again because the coin shows signs of bias. The fool says it's about time for Tails to come up.
Note: Some reputable experimenters claim that all coins are essentially fair, so let's assume the 'coin' in this Question is really a die with H on three faces and T on the other three. It is pretty well established that one can 'load' a die to be unfair. (Standard dice have a corner where faces 1, 3, and 5 touch; putting a heavy weight just inside the opposite corner biases the die toward odd outcomes.)