Using the relation between Beta and Gamma functions $$B(m,n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}$$ where $$B(m,n) = \int_{[0,1]} p^{m-1}(1-p)^{n-1}\operatorname{d}p$$ and the fact that for any $n \in \mathbb{N}$ it holds that $$\Gamma(n)=(n-1)! \;$$ it can easily be proved that for any $s,t\in\mathbb{N}$ such that $s\le t$ it holds $$ \frac{\int_{[0,1]}p^{s+1}(1-p)^{t-s}\operatorname{d}p}{\int_{[0,1]}p^{s}(1-p)^{t-s}\operatorname{d}p} = \frac{s+1}{t+2} \;. $$
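As a sanity check, this identity can be verified exactly in a few lines (a quick sketch; the function names `beta_int` and `posterior_mean` are mine, not standard):

```python
from fractions import Fraction
from math import factorial

def beta_int(m, n):
    # exact B(m, n) = Gamma(m) Gamma(n) / Gamma(m + n) for integers
    # m, n >= 1, using Gamma(k) = (k - 1)!
    return Fraction(factorial(m - 1) * factorial(n - 1), factorial(m + n - 1))

def posterior_mean(s, t):
    # ratio of the two integrals: B(s + 2, t - s + 1) / B(s + 1, t - s + 1)
    return beta_int(s + 2, t - s + 1) / beta_int(s + 1, t - s + 1)

# the identity (s + 1) / (t + 2), checked exactly for a few pairs s <= t
for s, t in [(0, 1), (3, 7), (10, 25)]:
    assert posterior_mean(s, t) == Fraction(s + 1, t + 2)
```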
These computations arise when we sample Bernoulli random variables in an i.i.d. manner according to some random parameter $P$, which is known to be drawn uniformly on $[0,1]$, and we want to compute the expected value of $P$ given that we have observed $s$ successes in $t$ trials, using Bayes' theorem.
I wonder whether an analogous formula holds if we know in advance that $P$ is drawn uniformly on $[a,b]$, with $0\le a<b\le1$. In this case, the posterior density of $P$ given $s$ successes in $t$ trials will be $$q \mapsto \chi_{[a,b]}(q) \frac{q^{s}(1-q)^{t-s}}{\int_{[a,b]}p^{s}(1-p)^{t-s}\operatorname{d}p}$$ and hence its expectation will be $$ \frac{\int_{[a,b]}p^{s+1}(1-p)^{t-s}\operatorname{d}p}{\int_{[a,b]}p^{s}(1-p)^{t-s}\operatorname{d}p} = (\star) $$
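Even without a closed formula, $(\star)$ is easy to evaluate numerically; below is a minimal midpoint-rule sketch (pure Python, stdlib only; the function name and the resolution parameter `n` are arbitrary choices of mine). For $[a,b]=[0,1]$ it reproduces $(s+1)/(t+2)$.

```python
def truncated_posterior_mean(s, t, a, b, n=200_000):
    # midpoint-rule approximation of
    #   (*) = int_a^b p^(s+1) (1-p)^(t-s) dp / int_a^b p^s (1-p)^(t-s) dp
    # fine for moderate t; for large t the weights underflow and a
    # log-space version would be needed
    h = (b - a) / n
    num = den = 0.0
    for k in range(n):
        p = a + (k + 0.5) * h
        w = p**s * (1.0 - p)**(t - s)
        num += p * w
        den += w
    return num / den

# sanity check against the closed form on [0, 1]:
# truncated_posterior_mean(3, 7, 0.0, 1.0) is approximately 4/9
```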
I know that asking for a closed formula for $(\star)$ is probably too much.
Instead, what I do hope for is that if $\frac{s}{t} \in [a,b]$ (or maybe even $\frac{s}{t} \in[c,d]$ for some $a<c<d<b$) and $t \ge t_0$, where $t_0$ is a (large) constant, then $$\bigg|(\star)-\frac{s}{t}\bigg| \le \frac{C}{t}$$ for some universal constant $C>0$, as was the case when $P$ was uniform in $[0,1]$, since $$ \bigg|\frac{s+1}{t+2} - \frac{s}{t}\bigg| = \frac{|t-2s|}{t(t+2)} \le \frac{1}{t+2}\;. $$
The idea is that, as long as the empirical mean $s/t$ lies in the acceptable range $[a,b]$ (or, if boundary effects are problematic, in a subinterval $[c,d]$ of $[a,b]$ with $a < c < d < b$) and the sample size $t$ is large enough, the prior on $P$ should carry less and less weight and the posterior should converge to a distribution highly concentrated around $s/t$; hence its expectation should be close to this value.
I haven't managed to find a meaningful relation (such as the one relating Beta and Gamma) to obtain this result and, actually, I'm not sure what could be a viable strategy to prove this conjecture.
Any ideas, hints, pointers to the literature?