Updating prior probability in light of new evidence - Does Bayes work in this specific experiment?


I was pondering a simple experiment in the framework of Bayesian statistics, and I reached a point where I seem to have a misunderstanding about the situation.
Consider the following simple setup:

I hand you a bag and tell you that it contains 10 marbles, each either black or white. However, I don't tell you how many marbles of each color it contains. I ask you for a first guess at the probability of pulling out a white marble, and obviously the best guess you can give with the information I presented is 50% for each color.

I now ask you to actually pull out a single marble, look at its color, and put it back in the bag. Let's say it was a black one.
Now, the question is:

What is the updated probability of pulling out a black marble (or a white marble) on the next draw?

Information changes probability, and probability is a measure of uncertainty. After the "evidence" of the first marble being black, one should be able to make a more educated guess about the probability. Repeating this experiment sufficiently often should make the estimate converge to the actual proportion.
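To make "repeating this experiment sufficiently often" concrete, here is a small simulation sketch. The true composition (3 white, 7 black) is an arbitrary assumption for illustration, since the post never fixes one:

```python
import random

def estimate_white_fraction(n_white_in_bag, n_draws, seed=0):
    """Draw with replacement from a 10-marble bag and return the
    observed fraction of white marbles."""
    rng = random.Random(seed)
    bag = ["white"] * n_white_in_bag + ["black"] * (10 - n_white_in_bag)
    whites = sum(rng.choice(bag) == "white" for _ in range(n_draws))
    return whites / n_draws

# With many draws the observed fraction approaches the true proportion.
print(estimate_white_fraction(3, 100_000))  # close to 0.3
```

This only shows that frequencies converge; it doesn't by itself say how to update a belief after a single draw, which is the actual question.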

My take:
P(B) : Probability of pulling out a black marble
P(W) : Probability of pulling out a white marble
P(W|B) : Probability of pulling out a white marble under the condition that the previous marble was black.

Using Bayes' theorem gives:

P(W|B) = P(B|W) * P(W) / P(B)
Now, because P(W) = P(B), we get P(W|B) = P(B|W).
This makes complete sense, of course, since the situation is symmetric, but it didn't get me anywhere toward an updated posterior probability.
Do you know a way I can update the probability? Or is Bayesian statistics the wrong tool for this?
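For concreteness, this is the kind of update I was trying (and failing) to write down, sketched numerically. It assumes a uniform prior over the 11 possible bag compositions, which is an assumption on my part and not part of the setup:

```python
# Hypotheses: the bag contains n black marbles, n = 0..10.
# Uniform prior over the 11 compositions; draws are with replacement.
prior = [1 / 11] * 11
likelihood_black = [n / 10 for n in range(11)]  # P(black | n black marbles)

# Posterior after observing one black marble, via Bayes' theorem.
unnorm = [p * l for p, l in zip(prior, likelihood_black)]
evidence = sum(unnorm)                  # P(first draw black) under this prior
posterior = [u / evidence for u in unnorm]

# Posterior predictive: probability that the NEXT draw is black.
p_black_next = sum(p * l for p, l in zip(posterior, likelihood_black))
print(evidence, p_black_next)  # approximately 0.5 and 0.7
```

Under this particular prior the first draw is black with probability 0.5, matching the symmetric guess above, but whether this hypothesis-based approach is the right way to frame the problem is exactly what I'm unsure about.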