I'm a philosophy student who has had to teach himself probability in order to get into formal epistemology. I have never taken a statistics or probability course, so I may be missing something very basic.
I'm trying to model the following scenario: you want to know how your beliefs about whether a certain die is fair or biased (allowing any bias) should evolve after seeing the same face of the die come up a certain number of times in a row.
I'm presenting the problem in the following way:
Suppose you have a die and want to assign probabilities to two hypotheses: $H_{f}$: "The die is fair" and $H_{b}$: "The die is biased". Also, suppose your priors are $P(H_{f})=P(H_{b})=0.5$. Let $j\in\mathbb{N}$ be the number of rolls in which you obtain the same outcome (ace, two, ..., or six) in a row. What is the probability of each hypothesis being true, given $j=x$ such rolls?
What should the model for that problem be if one follows Bayesian conditionalization?
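As far as I understand it, conditionalization here is just Bayes' theorem. Writing $E_{j}$ for the evidence "the same face came up $j$ times in a row", I get

$$P(H_{f}\mid E_{j}) = \frac{P(E_{j}\mid H_{f})\,P(H_{f})}{P(E_{j}\mid H_{f})\,P(H_{f}) + P(E_{j}\mid H_{b})\,P(H_{b})}$$

The term $P(E_{j}\mid H_{f})=(1/6)^{j-1}$ seems straightforward (the first roll can be anything, and each subsequent roll must match it). But $P(E_{j}\mid H_{b})$ requires choosing some model of what "biased" means, which is exactly where I'm unsure how to proceed.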
Clarifications:
The trial is rolling the die. I'm interested in the following scenarios: suppose you rolled the die twice and you got ace, ace or two, two or ... or six, six; those are the cases $j=2$. What should your new probabilities for the hypotheses $H_{f}$ and $H_{b}$ be after seeing any of the cases $j=2$, $j=3$, etc.?
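To make this concrete, here is how I've tried to compute the update under one deliberately simple (and probably too strong) toy assumption: a biased die always shows a single fixed face. Under that assumption a biased die produces identical rolls with probability 1, while a fair die repeats its first face with probability $(1/6)^{j-1}$. The function name and the assumption are mine, not part of any standard model:

```python
from fractions import Fraction

def posterior_fair(j, prior_fair=Fraction(1, 2)):
    """Posterior P(H_f | the first j rolls all show the same face),
    under the toy assumption that a biased die always shows one face."""
    # Likelihood of j identical rolls under each hypothesis:
    like_fair = Fraction(1, 6) ** (j - 1)  # first roll free, each repeat has prob 1/6
    like_bias = Fraction(1)                # a "stuck" die always repeats its face
    prior_bias = 1 - prior_fair
    numerator = like_fair * prior_fair
    return numerator / (numerator + like_bias * prior_bias)

print(posterior_fair(2))  # 1/7, i.e. about 0.143
print(posterior_fair(3))  # 1/37, i.e. about 0.027
```

So under this extreme model, two identical rolls already drop my credence in fairness from 1/2 to 1/7. I suspect this overstates the update, because real biases are rarely this extreme.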
I want the most general possible case for $H_{b}$, so I suppose I can just say that it is the complement of $H_{f}$. If all this doesn't make much sense, I would just like to ask: how do you model the dynamics of your probabilities for the hypotheses that a die is fair or not, given the outcome of a certain number of rolls, in the most general possible way?
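For the general case, my best attempt so far treats $H_{b}$ as "the six face probabilities are drawn uniformly from the simplex", i.e. a Dirichlet$(1,\dots,1)$ prior over dice. (This is my own modeling choice, and it isn't strictly the complement of $H_{f}$, since a die drawn from the simplex can be arbitrarily close to fair.) The marginal likelihood of a specific face repeating $j$ times then has a closed form, $\Gamma(6)\,\Gamma(j+1)/\Gamma(6+j)$:

```python
from fractions import Fraction
from math import factorial

def posterior_fair_dirichlet(j, prior_fair=Fraction(1, 2)):
    """Posterior P(H_f | a specific face came up j times in a row), where
    H_b says the face probabilities are uniform on the simplex (Dirichlet(1,...,1))."""
    like_fair = Fraction(1, 6) ** j
    # Dirichlet-multinomial marginal likelihood of j repeats of one face:
    # Gamma(6)*Gamma(j+1)/Gamma(6+j) = 120 * j! / (5+j)!
    like_bias = Fraction(120 * factorial(j), factorial(5 + j))
    prior_bias = 1 - prior_fair
    numerator = like_fair * prior_fair
    return numerator / (numerator + like_bias * prior_bias)

print(posterior_fair_dirichlet(1))  # 1/2: one roll carries no information
print(posterior_fair_dirichlet(2))  # 7/19, i.e. about 0.368
```

A sanity check I find reassuring: a single roll leaves the priors untouched, and the posterior for fairness falls much more slowly here than under the "stuck die" model, because the uniform prior over biases includes many nearly-fair dice. Is this the right way to set up the general case?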