Source: p 224, Think: A Compelling Introduction to Philosophy (1 ed, 1999) by Simon Blackburn.
I capitalised the minuscules that the author uses for variables.
I pursue only intuition; please do not answer with formal proofs. Discussing Bayes's Theorem
( $
\Pr(H|E)=\frac{\Pr(E|H)\Pr(H)}{\Pr(E)}
$ ), the author abbreviates 'evidence' to E and 'hypothesis' to H.
Of course, very often it is difficult or impossible to quantify the "prior" probabilities [ $\Pr(H)$ ] with any accuracy. It is important to realize that this need not matter as much as it might seem. Two factors alleviate the problem. First, even if we assign a range to each figure,
[1.] it may be that all ways of calculating the upshot give a sufficiently similar result. And second, it may be that in the face of enough evidence,
[2.] difference of prior opinion gets swamped. Investigators starting with very different antecedent attitudes to $\Pr(H)$ might end up assigning similarly high values to $\Pr(H \mid E)$, when $E$ becomes impressive enough.
My reading of the above as overconfident and presumptuous presumably implies a failure on my part to comprehend it; somehow, I am unpersuaded by 1 and 2. Would someone please explain them?
2) basically says: "if you receive enough evidence, then however unconvinced you were at the start, you must eventually become convinced". Since in real life we often have a lot of data - medical trials, for instance, get run even when we could in principle extract the answer from extant data - we can afford to be quite imprecise with our priors. (Of course, the assumption here is that we have lots of data. If we are trying to extract the truth from very little data, then a reasonable prior becomes very important, because so little updating can take place.)
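The swamping in 2) can be seen numerically. Here is a minimal sketch (the coin, the probabilities, and the two investigators are all hypothetical, not from the book): H is "the coin lands heads 80% of the time", the alternative is "the coin is fair", and two investigators with wildly different priors both apply Bayes's Theorem to the same run of flips.

```python
import random

# Hypothetical setup: H = "coin lands heads with probability 0.8",
# not-H = "coin is fair". Both investigators see the same flips.
P_HEADS_IF_H = 0.8    # Pr(heads | H)
P_HEADS_IF_NOT = 0.5  # Pr(heads | not-H)

def update(prior, heads):
    """One Bayesian update: Pr(H|E) = Pr(E|H) Pr(H) / Pr(E)."""
    like_h = P_HEADS_IF_H if heads else 1 - P_HEADS_IF_H
    like_not = P_HEADS_IF_NOT if heads else 1 - P_HEADS_IF_NOT
    # Pr(E) by the law of total probability
    evidence = like_h * prior + like_not * (1 - prior)
    return like_h * prior / evidence

random.seed(0)
skeptic, believer = 0.01, 0.90  # very different priors Pr(H)
for _ in range(200):
    heads = random.random() < P_HEADS_IF_H  # the coin really is biased
    skeptic = update(skeptic, heads)
    believer = update(believer, heads)

print(f"skeptic:  {skeptic:.6f}")
print(f"believer: {believer:.6f}")
```

After 200 flips both posteriors sit extremely close to 1, despite the factor-of-90 disagreement in the priors: the evidence has swamped the antecedent attitudes.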
1 seems to be a rephrasing of 2, to me; 2 seems to be the reason why 1 is true.