My understanding of the bootstrap is that it gives us a way to approximate the sampling distribution of an estimator computed from a dataset.
I've read statements of the form "bootstrapping relies on the closeness of the empirical CDF for a sample of size $n$ to the true CDF".
But I wanted to understand the implication of using the bootstrap in a simple case.
Suppose I have a dataset of $N$ Bernoulli trials, with $n$ successes. I want to quantify my uncertainty about $p$, the probability of success.
My understanding is that the Bayesian approach to this, with a Beta$(\alpha, \beta)$ prior, gives a pdf for $p$ of $$ P(p \mid N, n) = \frac{p^{n+\alpha-1} (1-p)^{N-n+\beta-1}}{B(n+\alpha,\, N-n+\beta)}, $$ where $\alpha$ and $\beta$ define the prior.
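For concreteness, the posterior above is just a Beta$(n+\alpha,\, N-n+\beta)$ distribution. Here is a minimal sketch evaluating it with made-up example values ($N=10$, $n=7$, and a uniform prior $\alpha=\beta=1$ — all numbers are illustrative):

```python
import math

# Assumed example data: N = 10 Bernoulli trials with n = 7 successes,
# and a uniform prior alpha = beta = 1 (illustrative values).
N, n = 10, 7
alpha, beta = 1.0, 1.0

# The posterior is Beta(a, b) with these parameters.
a, b = n + alpha, N - n + beta

def posterior_pdf(p):
    """Beta(a, b) density, i.e. P(p | N, n) from the formula above."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p) - log_B)

# Posterior mean of p is a / (a + b); with a uniform prior this is (n+1)/(N+2).
posterior_mean = a / (a + b)
```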
Naively, I guessed that using the bootstrap on this kind of data might give the same answer as the Haldane prior $\alpha = 0, \ \beta = 0$, since if $n = N$, both it and the bootstrap would force $p = 1$.
But when I wrote down the distribution the bootstrap implies for the estimate $\hat p = k/N$ (i.e., the probability of seeing $k$ successes in a resample), I get $$ P(\hat p = k/N) = {N \choose k} (n/N)^{k} (1 - n/N)^{N-k} $$
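This claim can be checked by simulation: resampling the data with replacement and counting successes should reproduce the Binomial$(N, n/N)$ pmf above. A quick sketch, again with made-up values $N=10$, $n=7$:

```python
import random
from collections import Counter
from math import comb

random.seed(0)

# Assumed example data: N = 10 trials, n = 7 successes (illustrative values).
N, n = 10, 7
data = [1] * n + [0] * (N - n)

# Draw many bootstrap resamples and record the number of successes k in each.
B = 200_000
counts = Counter(sum(random.choices(data, k=N)) for _ in range(B))

# Largest gap between the empirical frequencies and the Binomial(N, n/N) pmf.
max_gap = max(
    abs(counts[k] / B - comb(N, k) * (n / N) ** k * (1 - n / N) ** (N - k))
    for k in range(N + 1)
)
```

The empirical frequencies agree with the pmf up to Monte Carlo error, which is the sense in which the bootstrap distribution of $\hat p$ is exactly this binomial.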
This seems to take a totally different form from the Bayesian answer.
How should I understand both of these approaches? Am I totally misunderstanding the interpretation of bootstrapping in this case? Is there some Bayesian prior secretly implied by the use of the bootstrap in this example?
From the posterior distribution given by the Bayesian approach, you can compute a credible interval. The equation you wrote for the bootstrap doesn't quite make sense to me, though. The bootstrap is typically used to construct a confidence interval, and it does so by resampling, rather than by a closed-form computation.
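To make that concrete, here is a minimal sketch of a percentile bootstrap confidence interval, using made-up data ($N=10$, $n=7$) and the plain percentile method (other variants, such as BCa, exist):

```python
import random

random.seed(1)

# Assumed example data: N = 10 trials, n = 7 successes (illustrative values).
N, n = 10, 7
data = [1] * n + [0] * (N - n)

# Resample with replacement B times and compute p-hat for each resample.
B = 10_000
boot_phats = sorted(sum(random.choices(data, k=N)) / N for _ in range(B))

# 95% percentile bootstrap confidence interval for p.
lo = boot_phats[int(0.025 * B)]
hi = boot_phats[int(0.975 * B)]
```

Note that the interval is built from the empirical quantiles of the resampled $\hat p$ values, not from any closed-form density for $p$.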