Context
Let $X \hookrightarrow B(N,p)$.
Say we observed a single value $X = n$, and we know $N$.
Goal
I am interested in a posterior distribution of $p$ given $X = n$ (and an uninformative prior), i.e. a probability distribution on $[0,1]$ of the value of $p$ given our single observation.
From here, I would like to later be able to randomly draw a value of $p$ from this probability distribution, rather than only use $\hat{p} = n/N$. This is because I would like to reuse values of $p$ in simulations where $p$ can vary, but only in a manner that is likely given our data (a single observation of $X = n$ successes after $N$ trials). I expect the mode of this distribution to be at $n/N$, and the distribution to be more or less skewed as $n/N$ approaches $0$ or $1$.
Question
How do I obtain such a probability distribution $P(p \mid X = n)$?
What I considered
At first I was trying to rely on binomial confidence intervals to see, across $\alpha$, which values of $\hat{p}$ were likely (a value being more likely if it is contained in more $CI_\alpha$ across all $\alpha$), but then figured out that I probably need Bayes' theorem.
Intuitively, I would guess that my prior should be uniformly distributed across $[0,1]$ ($U(0,1)$), and that I need the likelihood of the binomial distribution. From what I came across, I understand that I will end up using a Beta distribution in some way, but I am still far from being fluent in Bayesian statistics, so please forgive all my imprecisions and lack of a deeper understanding.
My apologies for the poor choice of words in my original question.
After reading some more, I believe most aspects of my question found their answers here.
The beta distribution is commonly used to model the prior and posterior distributions of a proportion/probability in a Bayesian framework, as its two parameters can represent our knowledge of $X$. With a prior $$P(p) \hookrightarrow Beta(\alpha,\beta)$$ where $\dfrac{\alpha}{\alpha+\beta}$ roughly represents what we know/expect the proportion of successes to be, and $\dfrac{\beta}{\alpha+\beta}$ that of failures (with some nuances and debates regarding which uninformative prior is best), observing $X = n$ successes after $N$ trials leads to the posterior: $$P(p \mid X = n) \hookrightarrow Beta(\alpha+n,\beta+N-n)$$ which can in turn be updated with further observations, and so on.
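To illustrate, here is a minimal sketch in Python of drawing values of $p$ from this posterior, assuming a uniform prior $Beta(1,1)$; the values `N = 20` and `n = 13` are made-up numbers for the example, not from the question:

```python
import numpy as np

N, n = 20, 13          # hypothetical: N trials, n observed successes
alpha, beta = 1, 1     # uniform prior U(0,1) is Beta(1, 1)

rng = np.random.default_rng(0)
# Posterior is Beta(alpha + n, beta + N - n); draw values of p from it
# to reuse in simulations where p varies in a data-consistent manner.
p_draws = rng.beta(alpha + n, beta + N - n, size=10_000)

print(p_draws.mean())  # near the posterior mean (alpha + n) / (alpha + beta + N)
```

Note that with this uniform prior the posterior mode is exactly $(\alpha+n-1)/(\alpha+\beta+N-2) = n/N$, matching the expectation in the question.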