A long time ago I read about a simple estimator for the probability of a binary outcome (say $1$ or $0$), which takes the form
$$P = \dfrac{n+1}{m+2}$$
where $n$ is the number of "successes" and $m$ is the number of attempts.
So, if you have no information on which binary outcome is more likely ($n = m = 0$), you are forced to assume they're equally likely, each with $P = 0.5$.
If your first attempt has an outcome of $0$, you can then adjust your expected probability of the next attempt also being $0$ to $P_0 = \dfrac{1 + 1}{1 + 2} = \dfrac{2}{3}$.
After the second attempt also outputs $0$, you get $P_0 = \dfrac34$. When your third attempt outputs $1$, it becomes $P_0 = \dfrac35$, etc.
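The worked sequence above can be sketched in a few lines of Python; `estimate` is a hypothetical helper name, and exact fractions are used so the values match the ones in the text:

```python
from fractions import Fraction

def estimate(successes, attempts):
    """Estimated probability of success: (n + 1) / (m + 2)."""
    return Fraction(successes + 1, attempts + 2)

# Walk through the sequence of outcomes described above (0, 0, 1),
# tracking the estimated probability that the next outcome is 0.
zeros = attempts = 0
print(estimate(zeros, attempts))  # 1/2: no data yet
for outcome in [0, 0, 1]:
    attempts += 1
    if outcome == 0:
        zeros += 1
    print(estimate(zeros, attempts))  # 2/3, then 3/4, then 3/5
```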
What is this type of estimate (either in general or this specific equation) called? I remember it as a Bayesian estimator (it certainly reflects Bayesian reasoning), but after some Googling I couldn't find anything like it.
This is the Bayesian LMS (least mean squares) estimator; this specific formula is classically known as Laplace's rule of succession.
Let's consider a coin-tossing experiment with probability of heads $\Theta$ (a random variable in the Bayesian setup); the number of successes in $n$ tosses is the random variable $M$. (Note that the roles of $m$ and $n$ here are swapped relative to the question.)
We assume a uniform prior, $\Theta\sim U[0,1]$; the posterior then has a beta distribution, and the estimator $\widehat{\Theta}$ is
$$\widehat{\Theta}=\mathbb{E}[\Theta\mid M=m]=\frac{m+1}{n+2}$$
The derivation can be found here.
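For completeness, a short sketch of that derivation, using the standard beta–binomial conjugacy: with a uniform prior $f_\Theta(\theta)=1$ on $[0,1]$, Bayes' rule gives

$$f_{\Theta\mid M}(\theta\mid m)\;\propto\;f_\Theta(\theta)\,\mathbb{P}(M=m\mid\Theta=\theta)\;\propto\;\theta^m(1-\theta)^{n-m},$$

which is the density of a $\mathrm{Beta}(m+1,\,n-m+1)$ distribution. Its mean is

$$\mathbb{E}[\Theta\mid M=m]=\frac{m+1}{(m+1)+(n-m+1)}=\frac{m+1}{n+2}.$$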