If we have a beta-geometric distribution $X$ with pmf
$$P(X= k\mid\alpha, \beta) = \frac{Beta(\alpha+1,k + \beta)}{Beta(\alpha, \beta)}$$
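As a quick numerical sanity check of the pmf above (not part of the original question), the probabilities should sum to $1$ over $k = 0, 1, 2, \dots$. A minimal Python sketch using only the standard library; the parameter values $\alpha = 2$, $\beta = 3$ are illustrative assumptions:

```python
from math import lgamma, exp

def B(a, b):
    # Beta function computed via log-gamma for numerical stability
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def beta_geom_pmf(k, a, b):
    # P(X = k) = B(a + 1, b + k) / B(a, b), for k = 0, 1, 2, ...
    return B(a + 1, b + k) / B(a, b)

# Truncated sum over the support; the tail decays polynomially,
# so a large cutoff gets us close to 1 (illustrative parameters).
total = sum(beta_geom_pmf(k, 2.0, 3.0) for k in range(10000))
print(total)
```

The tail only decays like $k^{-(\alpha+1)}$, so the truncated sum approaches $1$ polynomially rather than geometrically.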
Sources: link to NIST
Answer in a different forum without explanation: alt_forum
Edit: Actually, I realised the solution is to look at the definition of $P(\cdot)$ more closely. Using as a reference (Murphy, "Machine Learning: A Probabilistic Perspective", p. 78, equation 3.31) and replacing the binomial distribution with a geometric distribution $Y$ starting at $0$, i.e. $P(Y = k\mid\theta) = \theta(1-\theta)^k$ for $k = 0, 1, \dots$, we get:
$$P(X\ge k\mid\alpha, \beta) = \int_{0}^{1}P(Y\ge k\mid\theta)\,Beta(\theta\mid\alpha, \beta)\,d\theta = \int_{0}^{1}(1-\theta)^k\,Beta(\theta\mid\alpha, \beta)\,d\theta = \frac{Beta(\alpha, \beta + k)}{Beta(\alpha, \beta)}\int_{0}^{1}Beta(\theta\mid\alpha, \beta + k)\,d\theta = \frac{Beta(\alpha, \beta + k)}{Beta(\alpha, \beta)}$$
Here $P(Y\ge k\mid\theta) = (1-\theta)^k$, since the geometric starts at $0$.
The last equality just follows from integrating the beta distribution. Note also that $Beta(\cdot\mid\cdot)$ denotes the beta probability density, while $Beta(\cdot,\cdot)$ denotes the beta function.
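The closed form can also be checked against a direct summation of the pmf. A minimal sketch, with the convention that $(1-\theta)^k$ is the tail probability of a geometric starting at $0$, so the ratio $Beta(\alpha, \beta+k)/Beta(\alpha, \beta)$ is $P(X \ge k)$; the parameter values are illustrative, not from the question:

```python
from math import lgamma, exp

def B(a, b):
    # Beta function via log-gamma for numerical stability
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def beta_geom_pmf(k, a, b):
    # P(X = k) = B(a + 1, b + k) / B(a, b), for k = 0, 1, 2, ...
    return B(a + 1, b + k) / B(a, b)

a, b, k = 2.0, 3.0, 5
closed_form = B(a, b + k) / B(a, b)                 # B(alpha, beta + k) / B(alpha, beta)
tail = sum(beta_geom_pmf(j, a, b) for j in range(k, 20000))  # P(X >= k) by summation
print(closed_form, tail)
```

The two values should agree up to the (polynomially decaying) truncation error of the sum.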
To avoid confusion you should not use $\beta$ for both the parameter and the beta function. I suggest using $B$ for the beta function.
Further, $\mathsf P(k; \alpha, \beta)$ is a probability density function, while $X$ is a continuous random variable realised within the support $[0,1]$. It certainly is not equal to its own pdf.
So, the claims you make at the very beginning of your chain are complete absurdities. $$\mathsf P(X>x) = \mathsf P(X>x-1)\,\mathsf P(X>0) = (1-t)^{x-1}\,\bigl(1-\mathsf P(X=0)\bigr)$$
So the reasoning that follows from there is invalid because it is based on an unjustifiable premise.
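The factorisation quoted above would follow from memorylessness, which a plain geometric has but the beta-geometric mixture does not. A quick numerical sketch of that point (not from the thread; it assumes support starting at $0$, so that $P(X>k) = Beta(\alpha, \beta+k+1)/Beta(\alpha, \beta)$, and the parameter values are illustrative):

```python
from math import lgamma, exp

def B(a, b):
    # Beta function via log-gamma for numerical stability
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def surv_bg(k, a, b):
    # P(X > k) for the beta-geometric on 0, 1, 2, ...
    return B(a, b + k + 1) / B(a, b)

def surv_geom(k, t):
    # P(Y > k) for a plain geometric on 0, 1, 2, ...: (1 - t)^(k + 1)
    return (1 - t) ** (k + 1)

a, b, t, x = 2.0, 3.0, 0.4, 2
# Memorylessness holds for the plain geometric ...
print(surv_geom(x, t), surv_geom(x - 1, t) * surv_geom(0, t))
# ... but fails for the beta-geometric mixture
print(surv_bg(x, a, b), surv_bg(x - 1, a, b) * surv_bg(0, a, b))
```

Mixing over $\theta$ destroys memorylessness, which is exactly why the quoted chain of equalities cannot be taken for granted.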