Let $D$ be a continuous distribution on the interval $[0,1]$ about which we have no prior knowledge. For a given error tolerance $\delta$ we want to find bounds $a, b$ s.t. $b-a$ is minimized while, for a random variable $X \sim D$, we have $Pr(a\leq X\leq b)\geq 1-\delta$.
To do this we sample $X_i \sim D$ for $i=1,\dots,n$ and view these as Bernoulli trials in which a success means that $X_i$ lies within the bounds. By $k$ we denote the number of successes, i.e., $k=\lvert\{X_i \mid a\leq X_i \leq b \}\rvert$. Then, for $X \sim D$, we can say
$$Pr(a \leq X \leq b) = I_b(k+1,n-k+1) - I_a(k+1,n-k+1)$$
where $I_x$ is the regularized incomplete beta function.
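For concreteness, the right-hand side above can be evaluated numerically; a minimal sketch in Python (assuming SciPy is available), where $I_x(k+1,n-k+1)$ corresponds to `scipy.special.betainc(k + 1, n - k + 1, x)`:

```python
# Evaluate I_b(k+1, n-k+1) - I_a(k+1, n-k+1), the expression from the question.
# scipy.special.betainc(a, b, x) is the regularized incomplete beta function I_x(a, b).
from scipy.special import betainc

def beta_interval_prob(a, b, n, k):
    """Difference of regularized incomplete beta functions at b and a."""
    return betainc(k + 1, n - k + 1, b) - betainc(k + 1, n - k + 1, a)
```

(Whether this number can be read as $Pr(a \leq X \leq b)$ is exactly what is at issue below.)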
Now, I understand that this procedure works fine for computing $Pr(a \leq X \leq b)$ given $a,b$ and all $X_i$.
My question is whether it is valid to apply the reverse here, i.e., given a bound for $Pr(a \leq X \leq b)$ and all $X_i$, can we find bounds $a$ and $b$?
A similar question is asked, for example, in this thread; the difference in my case is that adapting $a$ and $b$ directly affects the success probability of the Bernoulli trials.
Is it still valid to simply choose/find $a$ and $b$ s.t. $Pr(a\leq X \leq b) \geq 1-\delta$ is fulfilled, or do I need to formulate a new hypothesis and collect new data every time I change $a$ and $b$?
There are a couple of major problems that invalidate your approach: (1) It makes very unreasonable assumptions about the prior distribution for the unknown $p:=P(a < X < b)$, and (2) It confuses the posterior distribution of this quantity with the quantity itself, leading to nonsensical results (such as the equation you wrote involving the Beta c.d.f.).
As the problem is formulated, there are $n$ Bernoulli trials -- call them $Y_1,..,Y_n,$ where $Y_i=1_{a<X_i<b},$ for i.i.d. $X_1,..,X_n\sim D$ -- and the number of "successes" is $K:=\sum_{i=1}^nY_i.$ The Beta-Binomial model is then $$\begin{align}(K\mid p)&\sim\text{Bin}(n,p)\tag{1}\\ p&\sim\text{Beta}(\alpha,\beta)\tag{2}\\ \\ (p\mid K=k)&\sim\text{Beta}(\alpha+k,\beta+n-k)\tag{3} \end{align}$$ where $$p:=P(a<X<b),\quad X\sim D.$$
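The conjugate update $(1)$–$(3)$ can be sanity-checked by simulation: draw $p$ from the prior, draw $K$ given $p$, keep only the draws with $K=k$, and compare the retained values of $p$ to the closed-form posterior. A sketch (the values of $\alpha,\beta,n,k$ are arbitrary illustrations):

```python
# Monte Carlo check of Beta-Binomial conjugacy via rejection sampling:
# the draws of p retained on the event {K = k} should follow Beta(alpha+k, beta+n-k).
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n, k = 1.0, 1.0, 20, 14

p = rng.beta(alpha, beta, size=1_000_000)   # p ~ Beta(alpha, beta)
K = rng.binomial(n, p)                      # (K | p) ~ Bin(n, p)
posterior_draws = p[K == k]                 # condition on K = k

# The posterior mean should match (alpha + k) / (alpha + beta + n).
print(posterior_draws.mean(), (alpha + k) / (alpha + beta + n))
```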
You've assumed a fixed prior ($\alpha=\beta=1$) for $p:=P(a<X<b)$ irrespective of $a$ and $b$ -- which seems not at all reasonable.
For given constants $A,B$ (unrelated to $a,b$), the posterior (3) implies
$$P(A<p<B\mid K=k)=I_B(1+k,1+n-k)-I_A(1+k,1+n-k).$$ In particular, the model does not imply this equation: $$P(a<X<b\mid K=k)=I_b(1+k,1+n-k)-I_a(1+k,1+n-k).$$
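For the uniform prior $\alpha=\beta=1$ (the prior implicitly assumed in the question), the first display, which is a statement about $p$ rather than about $X$, can be computed with the Beta c.d.f.; a sketch assuming SciPy:

```python
# P(A < p < B | K = k) under the uniform prior, i.e. posterior Beta(1+k, 1+n-k).
# This is a probability about the unknown p = P(a < X < b), not about X itself.
from scipy.stats import beta

def posterior_prob_p_in(A, B, n, k):
    """Posterior probability that p lies in (A, B), given K = k successes in n trials."""
    return beta.cdf(B, 1 + k, 1 + n - k) - beta.cdf(A, 1 + k, 1 + n - k)
```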