Suppose we have a biased coin, but the bias (the probability of heads) is unknown to us. If we flip the coin $n$ times and get $n$ heads, what is the probability that we see heads on the $(n+1)$th flip? In other words, given some probability $\alpha$ of rejecting $H_0$, can we be confident that it will be heads again?
I am trying to solve this with hypothesis testing:
$$\begin{equation*} \begin{cases} H_0 \colon X_{n+1} = H \\ H_a \colon X_{n+1}= T \end{cases} \end{equation*}$$
If I understand correctly, you have a coin with some unknown probability $p$ of showing heads and probability $1-p$ of showing tails, and you want to estimate $p$ from the first $n$ flips? That is an estimation problem, not a hypothesis-testing problem. There are many ways to construct an estimator; the simplest is to use the proportion of heads among the first $n$ flips, which is a consistent, unbiased estimator of $p$.
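A quick simulation illustrates the sample-proportion estimator described above (a minimal sketch; the function name `estimate_bias` and the chosen true bias of 0.7 are hypothetical, just for the demonstration):

```python
import random

def estimate_bias(p, n, seed=0):
    # Flip a coin with heads-probability p a total of n times
    # and return the sample proportion of heads.
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n) if rng.random() < p)
    return heads / n

# Consistency: as n grows, the estimate concentrates around the true p.
true_p = 0.7
print(estimate_bias(true_p, 100))      # rough estimate from 100 flips
print(estimate_bias(true_p, 100_000))  # much closer to 0.7
```

With more flips the estimate settles near the true bias, which is the consistency property; averaging the estimator over many independent experiments would recover $p$ exactly, which is the unbiasedness property.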